Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 3L5-GS-11-03

Uncertainty and Explanation-based Human Debugging of Text Classification Model
*Masato OTA, Faisal HADIPUTRA

Abstract

AI democratization is advancing rapidly thanks to the availability of pre-trained NLP models. As a consequence, subject matter experts (SMEs), and not only data scientists, are turning to data-driven AI products to solve their problems. Using these products effectively requires an understanding of NLP models and continuous accuracy improvement, skills typically held only by data scientists. However, data scientists are not always available, so establishing a workflow that allows SMEs to improve model accuracy independently is essential. We therefore focus on debugging NLP models via human feedback, an approach studied in Explainable AI: humans provide feedback to the system based on the model's explanations. The feedback can take various forms, such as grouping similar samples or correcting invalid explanations. In our case, we aim to improve accuracy through domain-knowledge-aware data augmentation. In this study, we propose an efficient way to reduce the cost of manual data augmentation by exploiting model uncertainty. We experimented on text classification tasks and verified that human feedback effectively improves model accuracy, and that introducing uncertainty both speeds up augmentation and improves data quality.
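The core idea is to surface the samples the model is least certain about, so that an SME spends manual augmentation effort where it matters most. The abstract does not specify the uncertainty measure; the following is a minimal sketch assuming predictive entropy over class probabilities, with a stand-in classifier and toy data (the helper name `select_uncertain` is hypothetical, not from the paper).

```python
# Minimal sketch of uncertainty-based sample selection for human-guided
# data augmentation. Assumes predictive entropy as the uncertainty measure;
# the classifier, data, and helper names are illustrative stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of each row of class probabilities."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_uncertain(model, vectorizer, texts, k=2):
    """Return the k texts the model is least certain about.

    These are the samples an SME would review first, writing
    domain-knowledge-aware augmentations (e.g. paraphrases) for them.
    """
    probs = model.predict_proba(vectorizer.transform(texts))
    order = np.argsort(-predictive_entropy(probs))  # most uncertain first
    return [texts[i] for i in order[:k]]

# Toy training corpus standing in for a real text classification dataset.
train_texts = ["great product, works well", "terrible, broke immediately",
               "love it", "waste of money"]
train_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts),
                                 train_labels)

# Unlabeled pool: the most ambiguous samples are routed to the SME.
pool = ["amazing quality", "not sure how I feel about this",
        "it is okay I guess"]
print(select_uncertain(model, vectorizer, pool))
```

Ranking the pool by entropy means the SME never has to scan the whole dataset; only the top-k ambiguous samples are augmented, which is how uncertainty reduces the cost of the feedback loop described above.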

© 2023 The Japanese Society for Artificial Intelligence