Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
36th (2022)
Session ID : 3Yin2-53

A Study on End-to-End Training for Empathetic Dialogue Generation
*Takeshi HOMMA, Soichi KAGEYAMA, Mana ISHIDA, Naokazu UCHIDA, Hajime MORI, Makoto IWAYAMA, Yasuhiro SOGAWA
Abstract

To realize detailed customization of empathy in the responses of open-domain dialogue systems, we compare methods of creating finetuning data for dialogue model training: (a) dialogue example-based finetuning, (b) dialogue act-based finetuning, and (c) prototype-based finetuning. Based on subjective experiments on the quality of dialogue responses, we found that the most successful method is dialogue example-based finetuning, in which a small number (one hundred) of utterance-response pairs including empathetic responses are used to finetune a pretrained dialogue model. Dialogue act-based finetuning, in which the finetuning data is created by extracting empathetic responses from a noisy dialogue dataset, improved the quality only when the automatic empathetic response extractor is trained on dialogue data in the same domain as the target. Prototype-based finetuning, in which a few (ten) response examples are used to find suitable finetuning data in a noisy dialogue dataset, did not improve the quality.
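The dialogue example-based method above can be sketched as a simple data-preparation step: a small set of utterance-response pairs is serialized into training strings for a pretrained dialogue model. The function name, separator, and end-of-sequence tokens below are illustrative assumptions; the abstract does not specify the authors' actual data format.

```python
# Sketch of dialogue example-based finetuning data preparation.
# The separator/eos tokens and the pair format are assumptions for
# illustration; the paper's actual setup is not given in this abstract.

def build_finetuning_examples(pairs, sep="<sep>", eos="<eos>"):
    """Turn (utterance, response) pairs into training strings
    suitable for finetuning a pretrained dialogue model."""
    return [f"{utterance}{sep}{response}{eos}" for utterance, response in pairs]

# A small set of empathetic utterance-response pairs; the paper
# reports using about one hundred such pairs.
pairs = [
    ("I failed my exam today.", "That sounds really tough. I'm sorry to hear it."),
    ("My dog has been sick.", "Oh no, that must be so worrying for you."),
]

examples = build_finetuning_examples(pairs)
```

The resulting strings would then be fed to an ordinary language-model finetuning loop; only the data creation step differs among the three compared methods.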

© 2022 The Japanese Society for Artificial Intelligence