Host: The Japanese Society for Artificial Intelligence
Name: 34th Annual Conference, 2020
Number: 34
Location: Online
Date: June 09, 2020 - June 12, 2020
Large-scale pretrained language models have been shown to be effective for identifying the argumentative structure of texts, as well as for a wide range of other natural language processing tasks. However, recent studies show that these models exploit dataset-specific biases (henceforth, superficial cues) for prediction, and that suppressing them could further improve the generalization ability of these models. We first investigate superficial cues in Argument Annotated Essays (AAE), a widely used dataset for argument mining, and show that AAE contains superficial cues for argumentative link identification, a subtask of argumentative structure identification. We then propose a simple method to suppress models' dependence on superficial cues without any manual annotation effort. Our experiments demonstrate that the proposed method has the potential to improve the generalization ability of argumentative link identification models.
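As a rough illustration of how superficial cues can be detected, a common diagnostic in the bias literature is a partial-input probe: if a classifier can predict the link label from only one of the two argument components, the label is partly recoverable from a superficial cue. The sketch below uses toy synthetic sentence pairs (not drawn from AAE) and a trivial rule-based probe in place of a trained classifier; it is an assumption for illustration, not the paper's exact procedure.

```python
# Partial-input probe sketch: predict whether two argument components
# are linked while looking ONLY at the source component. High accuracy
# suggests the dataset leaks the label through a superficial cue.

# Toy synthetic pairs (source, target, linked?) -- NOT drawn from AAE.
pairs = [
    ("Therefore, we should ban cars", "Cars pollute the air", 1),
    ("Cars pollute the air", "Bikes are healthy", 0),
    ("Thus, taxes must rise", "Public services need funds", 1),
    ("Public services need funds", "Winters are cold", 0),
]

# A deliberately simple "probe": predict a link whenever the source
# opens with a discourse marker. A real probe would be a trained
# classifier, but the logic is the same: it never sees the target.
MARKERS = ("therefore", "thus", "hence", "consequently")

def partial_input_probe(source: str) -> int:
    return int(source.lower().startswith(MARKERS))

correct = sum(partial_input_probe(src) == label for src, _, label in pairs)
accuracy = correct / len(pairs)
# The toy data was constructed to contain a cue, so the probe is
# perfect here; on a real dataset, any accuracy well above the
# majority-class baseline is the warning sign.
print(f"partial-input probe accuracy: {accuracy:.2f}")  # → 1.00
```

On real data the probe would be trained on one split and evaluated on a held-out split, and its accuracy compared against a majority-class baseline rather than against 100%.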