Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 2T5-OS-5b-01

Evaluating the Effectiveness of Metacognitive Prompting in Causal Inference Using Large Language Models
*Ryusei OHTANI, Yuko SAKURAI, Satoshi OYAMA
Abstract

Causal inference using large language models (LLMs) has become an important research topic in recent years. Research and development on prompt engineering has also been actively pursued as a way to improve the accuracy of LLM responses. In particular, metacognitive prompting, which applies human introspective thinking, is known to substantially improve response accuracy on a variety of tasks. In this study, we evaluate the effectiveness of metacognitive prompting on necessary/sufficient cause judgment problems. The results show that metacognitive prompting was not necessarily effective. On the other hand, we found that judgment problems that metacognitive prompting alone could not solve at all could be led to correct answers by additionally providing multiple examples of similar problems together with their correct answers.
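As an illustration of the two prompting conditions the abstract contrasts, the sketch below constructs a metacognitive prompt (introspective self-evaluation steps) and a few-shot prompt (worked examples with correct answers prepended) for a necessary/sufficient cause judgment question. The paper does not publish its actual prompts, so the wording, example problems, and function names here are assumptions for illustration only.

```python
# Illustrative sketch only: the exact prompt wording used in the paper is not
# published, so the step phrasing and example problems below are assumptions.

def metacognitive_prompt(question: str) -> str:
    """Wrap a causal judgment question in introspective (metacognitive) steps."""
    return (
        f"Question: {question}\n"
        "Step 1: Restate the question in your own words.\n"
        "Step 2: Give a preliminary judgment "
        "(necessary cause, sufficient cause, both, or neither).\n"
        "Step 3: Critically re-examine your judgment and note possible errors.\n"
        "Step 4: State your final answer and your confidence in it."
    )

def few_shot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Prepend similar solved problems with correct answers before the target."""
    shots = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    return f"{shots}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    q = ("Is striking the match a necessary cause, a sufficient cause, "
         "both, or neither, of the match lighting?")
    # Hypothetical worked examples of the same problem type.
    examples = [
        ("Is the presence of oxygen a necessary cause of the fire?",
         "Necessary but not sufficient."),
        ("Is decapitation a sufficient cause of death?",
         "Sufficient but not necessary."),
    ]
    print(metacognitive_prompt(q))
    print()
    print(few_shot_prompt(q, examples))
```

Either string would then be sent to an LLM as a single user message; the abstract's finding is that only the second, example-augmented form reliably elicits correct answers on the hardest judgment problems.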

© 2024 The Japanese Society for Artificial Intelligence