Host: The Japanese Society for Artificial Intelligence
Name : The 36th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 36
Location : [in Japanese]
Date : June 14, 2022 - June 17, 2022
Neural language models such as XLNet have been successfully applied to reading comprehension problems. However, the process by which these models solve such problems is not well understood. In this study, we analyzed the output of XLNet on the irrelevant sentence removal problem using SHAP, a method that provides “explanations” for machine learning models. In an irrelevant sentence removal problem, a text containing an unnecessary sentence is given, and that sentence must be eliminated to make the text more coherent. SHAP calculates the importance of each input element as an “explanation” of the model’s decision. We analyzed the output of SHAP both qualitatively and quantitatively. As a result, we found that (1) XLNet captures the naturalness of the sentences before and after the eliminated sentence, (2) XLNet depends heavily on periods and commas, and (3) XLNet is strongly affected by a small number of words such as adverbs.
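The idea behind SHAP-style attribution can be sketched with a toy example. SHAP assigns each input element its Shapley value: the element’s marginal contribution to the model’s score, averaged over all orderings in which elements are added. The sketch below is not the paper’s actual setup — `toy_score` is a hypothetical stand-in for XLNet’s coherence score, and the token set is invented for illustration; real SHAP libraries use sampling or kernel approximations because exact enumeration is exponential in the number of tokens.

```python
from itertools import permutations

def toy_score(present):
    # Hypothetical stand-in for a model's coherence score: an additive
    # function rewarding certain cue tokens (a connective, a period,
    # a content word). Illustrative only, not XLNet.
    score = 0.0
    if "however" in present:
        score += 0.5
    if "." in present:
        score += 0.3
    if "cat" in present:
        score += 0.1
    return score

def shapley_values(tokens, score):
    """Exact Shapley values: average each token's marginal contribution
    to `score` over every ordering of the tokens (tractable only for
    small inputs; practical SHAP implementations approximate this)."""
    n = len(tokens)
    values = {t: 0.0 for t in tokens}
    orders = list(permutations(tokens))
    for order in orders:
        present = set()
        prev = score(present)
        for t in order:
            present.add(t)
            cur = score(present)
            values[t] += cur - prev  # marginal contribution of t
            prev = cur
    return {t: v / len(orders) for t, v in values.items()}

vals = shapley_values(["however", ".", "cat"], toy_score)
```

Because `toy_score` is additive, each token’s Shapley value equals its own weight, and the values sum to the full-input score; for a non-additive model like XLNet the attributions also reflect interactions between tokens.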