Host: The Japanese Society for Artificial Intelligence
Name : The 100th SIG-SLUD
Number : 100
Location : [in Japanese]
Date : February 29, 2024 - March 01, 2024
Pages : 101-106
Large language models (LLMs) are increasingly used for a wide range of language processing tasks. However, LLMs suffer from the hallucination problem: they generate information that is inconsistent with the facts. To address this problem, methods have been proposed that construct hallucination detectors and correctors using machine learning. These methods have not yet solved the problem adequately, however, because the detectors produce false positives and the correctors over-correct. Other methods use LLMs themselves to detect and correct hallucinations, but because they rely on pipeline processing with multiple prompts, they likewise fail to fundamentally resolve false positives and over-correction. In this study, we propose a post-editing method that uses an LLM with a single prompt. Focusing on hallucinations of numerals and proper nouns, we compared the proposed method with existing methods and confirmed its effectiveness.
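As a rough illustration of the single-prompt idea, the sketch below assembles one prompt that asks a model to post-edit a draft against its source, restricting corrections to numerals and proper nouns. The prompt wording, the `build_postedit_prompt` helper, and the `llm` call are all hypothetical illustrations; the abstract does not specify the paper's actual prompt or API.

```python
def build_postedit_prompt(source: str, draft: str) -> str:
    """Assemble a single post-editing prompt (hypothetical wording).

    The instruction limits edits to numerals and proper nouns that
    contradict the source, mirroring the hallucination types the
    paper focuses on.
    """
    return (
        "You are a post-editor. Compare the draft against the source.\n"
        "Fix only numerals and proper nouns that contradict the source;\n"
        "leave everything else unchanged. Return the edited draft only.\n\n"
        f"Source:\n{source}\n\n"
        f"Draft:\n{draft}\n"
    )

prompt = build_postedit_prompt(
    "The workshop ran from February 29 to March 1, 2024.",
    "The workshop ran from February 28 to March 2, 2024.",
)
# In an actual system, this single prompt would be sent to an LLM in
# one call, e.g. edited = llm(prompt), with no detector/corrector
# pipeline and no follow-up prompts.
print(prompt)
```

Because detection and correction happen in one pass over one prompt, there is no separate detector whose false positives a downstream corrector could amplify.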