Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 3T1-GS-6-04

Improving Method for Extracting Causality from Text with Multi-step Fine-tuning of GPT-3 using Wikidata
*Taketo OHIRA, Shun SHIRAMATSU
Abstract

Causal knowledge is necessary to develop a facilitator agent that can understand the points of discussion and the opinions of the participants. However, the causal knowledge contained in Wikidata, a well-known knowledge graph, is not sufficient. In a previous study, we therefore attempted to extend the training data for re-training GPT-3 with Wikidata's causal knowledge as a method for cause extraction, and we confirmed that the accuracy improved over the conventional method. In this paper, we hypothesize that multi-step re-training, rather than mere data expansion, improves accuracy, and we verify this hypothesis through experiments. The results show that multi-step re-training improves extraction accuracy compared to mere data expansion. Furthermore, we design a "generality" measure to determine whether an extracted cause is widely known to the general public, and we confirm the tendency that causal relationships widely known to the general public score higher in "generality".
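As a minimal sketch of the data-expansion step (not the authors' actual pipeline), cause-effect pairs such as those expressed by Wikidata's "has cause" property (P828) could be converted into prompt/completion records for GPT-3 fine-tuning. The pairs and the prompt template below are illustrative assumptions:

```python
import json

# Hypothetical cause-effect pairs, as might be extracted from Wikidata's
# "has cause" property (P828). The study's actual data may differ.
causal_pairs = [
    ("influenza", "influenza virus"),
    ("tsunami", "earthquake"),
]

def to_finetune_record(effect: str, cause: str) -> dict:
    """Format one causal pair as a GPT-3 fine-tuning example (assumed template)."""
    return {
        "prompt": f"What is a cause of {effect}?\n\n###\n\n",
        "completion": f" {cause} END",
    }

# Fine-tuning data is typically supplied as JSONL, one record per line.
records = [to_finetune_record(e, c) for e, c in causal_pairs]
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Under the multi-step hypothesis, such Wikidata-derived records would form one re-training stage rather than simply being appended to the existing training set.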

© 2023 The Japanese Society for Artificial Intelligence