Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 3T1-GS-6-02

Verification of Chain-of-Thought Prompting in Japanese
*Kaito HORIO, Eiki MURATA, Hao WANG, Tatuya IDE, Daisuke KAWAHARA, Takato YAMAZAKI, Kenta SHINZATO, Akifumi NAKAMACHI, Shengzhe LI, Toshinori SATO
Keywords: NLP

Abstract

Foundation models can be adapted to various tasks through few-shot learning, which uses a small number of examples as a prompt. To improve few-shot learning, Chain-of-Thought (CoT) prompting has been proposed, which divides the reasoning process into explicit steps. Although the effectiveness of CoT has been demonstrated on English tasks requiring logical reasoning, it has not been verified in Japanese. We examine the effectiveness of CoT in Japanese using a Japanese foundation model, HyperCLOVA JP. We first construct Japanese datasets for three tasks: arithmetic, commonsense, and symbolic reasoning. We then conduct experiments using HyperCLOVA models of four different sizes. The results show that CoT prompts achieve higher accuracy than standard prompts and that the performance gain from CoT prompting correlates with model size.
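The difference between a standard few-shot prompt and a CoT prompt lies only in the answer text of the in-context examples: a standard prompt gives the final answer, while a CoT prompt prepends the intermediate reasoning steps. A minimal sketch of this prompt construction is shown below; the example problem and reasoning text are illustrative (in English, for readability) and are not taken from the paper's Japanese datasets.

```python
def build_prompt(examples, question, cot=False):
    """Concatenate few-shot examples and a target question into one prompt.

    Each example is a dict with "question", "answer" (final answer only),
    and "cot_answer" (step-by-step reasoning ending in the answer).
    """
    parts = []
    for ex in examples:
        answer = ex["cot_answer"] if cot else ex["answer"]
        parts.append(f"Q: {ex['question']}\nA: {answer}")
    # The target question is left open for the model to complete.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)


# Illustrative arithmetic example (not from the paper's datasets).
examples = [{
    "question": ("Roger has 5 tennis balls. He buys 2 more cans of "
                 "3 tennis balls each. How many tennis balls does he have now?"),
    "answer": "11",
    "cot_answer": ("Roger started with 5 balls. 2 cans of 3 balls each "
                   "is 6 balls. 5 + 6 = 11. The answer is 11."),
}]

standard_prompt = build_prompt(examples, "What is 12 + 7?", cot=False)
cot_prompt = build_prompt(examples, "What is 12 + 7?", cot=True)
```

Both prompts would then be sent to the language model as plain text; the only change is whether the demonstrations include intermediate reasoning.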

© 2023 The Japanese Society for Artificial Intelligence