Host : The Japanese Society for Artificial Intelligence
Name : The 36th Annual Conference of the Japanese Society for Artificial Intelligence
Number : 36
Location : [in Japanese]
Date : June 14, 2022 - June 17, 2022
To realize detailed customization of empathy in the responses of open-domain dialogue systems, we compare methods for creating finetuning data for dialogue model training: (a) dialogue example-based finetuning, (b) dialogue act-based finetuning, and (c) prototype-based finetuning. Based on subjective experiments on the quality of dialogue responses, we found that the most successful method is dialogue example-based finetuning, in which a small number (one hundred) of utterance-response pairs that include empathetic responses are used to finetune a pretrained dialogue model. Dialogue act-based finetuning, in which the finetuning data is created by extracting empathetic responses from a noisy dialogue dataset, improved the quality only when the automatic empathetic response extractor is trained on dialogue data in the same domain as the target one. Prototype-based finetuning, in which a few (ten) response examples are used to retrieve suitable finetuning data from a noisy dialogue dataset, did not improve the quality.
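As an illustration of the dialogue example-based setting, the following minimal sketch finetunes a pretrained causal language model on a handful of utterance-response pairs using Hugging Face Transformers. The base model (gpt2), the example pairs, the separator convention, and all hyperparameters are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch: finetune a pretrained dialogue/causal LM on a small set of
# utterance-response pairs that include empathetic responses.
# Everything below (model name, data, hyperparameters) is illustrative.
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)


class PairDataset(Dataset):
    """Wraps (utterance, response) pairs as single training sequences."""

    def __init__(self, pairs, tokenizer, max_len=128):
        self.examples = []
        for utterance, response in pairs:
            # Concatenate utterance and response with the EOS token as a
            # separator (an assumed convention, not the paper's).
            text = utterance + tokenizer.eos_token + response + tokenizer.eos_token
            enc = tokenizer(text, truncation=True, max_length=max_len)
            self.examples.append(enc["input_ids"])

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return {"input_ids": self.examples[idx]}


# In the paper's setting this would be roughly one hundred pairs;
# the single pair here is only a placeholder.
pairs = [
    ("I failed my exam today.",
     "That sounds really tough. I'm sorry to hear that."),
    # ... remaining utterance-response pairs ...
]

model_name = "gpt2"  # placeholder for a pretrained dialogue model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="empathetic-ft",
        num_train_epochs=5,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=PairDataset(pairs, tokenizer),
    # Causal-LM collator: pads batches and uses input_ids as labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```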