JSAI Technical Report, Type 2 SIG
Online ISSN: 2436-5556
Effectiveness of LLM Agents Based on User False Belief Correction: Analysis Using the BDI Model
Zanwei WANG, Yuta ASHIHARA, Takashi OMORI, Masahiko OSAWA
Research Report / Technical Report

2024, Volume 2024, Issue AGI-027, Pages 233-241

Abstract

Large Language Models (LLMs) are capable of sophisticated language understanding, but they may carry out instructions grounded in a user's incorrect beliefs (misconceptions) without correcting them. The purpose of this study is to clarify the problems that arise when such false beliefs are left uncorrected. First, we tested an LLM agent based on the BDI model that corrects a user's incorrect beliefs when the user is assumed to hold them. In the experiment, we compared 14 false-belief cases with and without correction of false beliefs. The results showed that without belief estimation, the agent's suggestions satisfied the user's desire in only 8 of the 14 cases, whereas with estimation and correction of false beliefs the user's desire was satisfied in all 14 cases. These results suggest that it is difficult to respond to the user's wishes without appropriately correcting the user's false beliefs.
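
As a rough illustration of the agent architecture described in the abstract (belief estimation, false-belief correction, then action selection under the BDI model), the following Python sketch shows one possible loop. It is a minimal sketch under assumed names: query_llm, KNOWN_FACTS, and the example fact are hypothetical placeholders, not the authors' implementation.

# Minimal, hypothetical sketch of a BDI-style loop with false-belief correction.
# Names such as query_llm and KNOWN_FACTS are illustrative placeholders and do
# not reflect the authors' implementation.

KNOWN_FACTS = {
    # Assumed ground truth available to the agent.
    "store_closing_time": "20:00",
}

def query_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; swap in a real LLM client here.
    return "[LLM reply to] " + prompt

def estimate_beliefs(utterance: str) -> dict:
    # Belief: infer what the user assumes to be true from the utterance.
    reply = query_llm("State the user's assumption about store_closing_time: " + utterance)
    return {"store_closing_time": reply}

def find_false_beliefs(beliefs: dict) -> list:
    # Compare the estimated beliefs against the agent's known facts.
    return [key for key, value in beliefs.items()
            if key in KNOWN_FACTS and value != KNOWN_FACTS[key]]

def respond(utterance: str) -> str:
    beliefs = estimate_beliefs(utterance)            # Belief
    false_beliefs = find_false_beliefs(beliefs)
    if false_beliefs:
        # Desire: fulfil the user's underlying goal, which here requires
        # correcting the misconception before proposing an action.
        corrections = "; ".join(f"{k} is actually {KNOWN_FACTS[k]}" for k in false_beliefs)
        prompt = (f"The user said: {utterance}\n"
                  f"First correct these misconceptions: {corrections}\n"
                  f"Then propose an action that fulfils the user's desire.")
    else:
        prompt = f"Propose an action that fulfils the user's request: {utterance}"
    return query_llm(prompt)                         # Intention: the chosen action

if __name__ == "__main__":
    print(respond("The store closes at 22:00, so let's go shopping at 21:00."))

In this sketch the correction is injected into the prompt before the action proposal, mirroring the comparison between the with-correction and without-correction conditions described in the abstract.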

© 2024 Authors