JSAI Technical Report, SIG-SLUD
Online ISSN : 2436-4576
Print ISSN : 0918-5682
100th (Feb.2024)

Post-editing of Hallucinations by Prompt-tuning
Haruki HATAKEYAMA, Keita MORIWAKI, Masaki SHUZO, Eisaku MAEDA

Pages 101-106

Abstract

Large language models (LLMs) are increasingly being used for various language processing tasks. However, LLMs suffer from the hallucination problem of generating information that is inconsistent with the facts. To address this problem, methods have been proposed that construct hallucination detectors and correctors using machine learning. These methods have not yet solved the problem adequately, because the detectors produce false detections and the correctors make excessive corrections. Alternatively, there are methods that use LLMs themselves to detect and correct hallucinations, but these methods rely on pipeline processing with multiple prompts and thus do not fundamentally resolve false positives and over-corrections. In this study, we propose a post-editing method using a single-prompt LLM. Focusing on hallucinations of numerals and proper nouns, we compared the proposed method with existing methods and confirmed its effectiveness.
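To illustrate the contrast the abstract draws, the sketch below packs detection and correction into a single prompt rather than a multi-prompt detect-then-correct pipeline. This is a minimal sketch under stated assumptions: the prompt wording, the `build_postedit_prompt` and `call_llm` names, and the example texts are illustrative inventions, not the authors' actual implementation.

```python
# Hypothetical sketch of single-prompt post-editing of hallucinations.
# All names and the prompt template are assumptions for illustration;
# the paper's actual prompt design is not reproduced here.

def build_postedit_prompt(source: str, draft: str) -> str:
    """Pack detection and correction instructions into one prompt,
    instead of a multi-prompt detect-then-correct pipeline."""
    return (
        "You are a post-editor. Compare the draft against the source.\n"
        "Fix only numerals and proper nouns that contradict the source;\n"
        "leave everything else unchanged to avoid over-correction.\n\n"
        f"Source:\n{source}\n\n"
        f"Draft:\n{draft}\n\n"
        "Corrected draft:"
    )

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return "(model output would appear here)"

def post_edit(source: str, draft: str) -> str:
    # One LLM invocation handles both detection and correction.
    return call_llm(build_postedit_prompt(source, draft))

prompt = build_postedit_prompt(
    source="The bridge opened in 1998.",
    draft="The bridge opened in 1989.",
)
```

The point of the single-prompt design is that the model sees the source and the draft together, so it need not commit to a separate detection decision that a downstream corrector could then over-apply.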

© 2024 The Japanese Society for Artificial Intelligence