Host: The Japanese Society for Artificial Intelligence
Name : The 97th SIG-SLUD
Number : 97
Location : [in Japanese]
Date : March 08, 2023 - March 09, 2023
Pages : 50-55
When a dialogue system holds long-term conversations with a person, it is desirable to generate responses that take past dialogue sessions into account. However, the conversation logs used for training dialogue systems do not necessarily contain many responses that consider the past dialogue context. It is therefore difficult to generate responses that fully respect the past dialogue context if the dialogue system is trained only by concatenating the past dialogue context with the current context. In this paper, we propose a multi-task learning method for response generation that forces the dialogue system to consider the past context adequately. The auxiliary self-supervised task is to generate the system-side utterance included in the past dialogue context that is most similar to the current context. In the experiment, we trained our proposed models on the Multi-session Twitter Dialogue Dataset and verified the effect of our data augmentation methods.
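The auxiliary task described above requires selecting, for each current context, the most similar past dialogue context and using its system-side utterance as an additional generation target. A minimal sketch of that selection step is shown below; the bag-of-words cosine similarity, the `auxiliary_target` function, and the session data format are illustrative assumptions, not the paper's actual retrieval method, which is not specified in this abstract.

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Bag-of-words cosine similarity between two token lists
    (an assumed, simple similarity measure for illustration)."""
    ca, cb = Counter(a), Counter(b)
    num = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    den = math.sqrt(sum(v * v for v in ca.values())) * \
          math.sqrt(sum(v * v for v in cb.values()))
    return num / den if den else 0.0

def auxiliary_target(current_context, past_sessions):
    """Pick the past session whose context is most similar to the
    current context and return its system-side utterance, which
    serves as the target of the auxiliary generation task."""
    best = max(
        past_sessions,
        key=lambda s: cosine_sim(current_context.split(), s["context"].split()),
    )
    return best["system_utterance"]

# Hypothetical past sessions (context + system-side utterance pairs).
past = [
    {"context": "I went hiking last weekend",
     "system_utterance": "Which trail did you take?"},
    {"context": "I adopted a new cat",
     "system_utterance": "What did you name your cat?"},
]

print(auxiliary_target("my cat has been playful lately", past))
# → What did you name your cat?
```

In multi-task training, this retrieved utterance would be paired with the current context as an extra training example, alongside the ordinary response-generation loss.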