Transactions of the Japanese Society for Artificial Intelligence (人工知能学会論文誌)
Online ISSN: 1346-8030
Print ISSN: 1346-0714
ISSN-L: 1346-0714
Original Paper
Diverse Chit-chat Dialogue Generation by Deep Learning Incorporating Word Statistics into the Loss Function
上山 彩夏・狩野 芳伸

2022, Vol. 37, No. 2, p. G-L62_1-10

Abstract

In recent years, there has been much research on building dialogue systems with deep learning, and such systems can generate relatively fluent responses to user utterances. Nevertheless, they tend to produce responses that are not diverse and that depend little on the context. Attributing this problem to the Softmax Cross-Entropy (SCE) loss, which treats all words equally without considering their imbalance in the training data, prior work proposed the Inverse Token Frequency (ITF) loss, which multiplies the SCE loss by a weight based on the inverse of each token's frequency, and confirmed that it improves dialogue diversity. However, sentence-level diversity depends not only on individual tokens but also on the frequency of token sequences. Computing weights from the frequencies of token sequences yields weights that change dynamically with the context and thus better capture the diversity we seek. We therefore propose the Inverse N-gram Frequency (INF) loss, which weights the SCE loss by the inverse n-gram frequency of the tokens instead of the inverse token frequency. To confirm the effectiveness of the proposed INF loss, we conducted metric-based and human evaluations of sentences automatically generated by models trained on Japanese and English Twitter datasets. The metric-based evaluation used Perplexity, BLEU, DIST-N, ROUGE, and response length as metrics; the human evaluation assessed the coherence and diversity of the responses. In the metric-based evaluation, the proposed INF model outperformed the previous methods on Perplexity, DIST-N, and ROUGE, and it also scored higher in the human evaluation.
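The abstract states the INF loss only in words: per-token softmax cross-entropy multiplied by a weight based on the inverse frequency of the n-gram ending at each target position. Below is a minimal PyTorch sketch of that idea; the function names (build_ngram_counts, inf_loss), the n-gram order, the add-one smoothing for unseen n-grams, and the normalization are illustrative assumptions, not the paper's actual settings.

```python
# Sketch of an Inverse N-gram Frequency (INF) style loss: token-level
# softmax cross-entropy weighted by the inverse frequency of the n-gram
# ending at each target position. Details are assumptions, not the
# paper's exact formulation.
from collections import Counter

import torch
import torch.nn.functional as F


def build_ngram_counts(corpus, n=3):
    """Count n-grams (tuples of token ids) over the training responses."""
    counts = Counter()
    for sent in corpus:  # each sent is a list of token ids
        for i in range(len(sent) - n + 1):
            counts[tuple(sent[i:i + n])] += 1
    return counts


def inf_loss(logits, targets, ngram_counts, n=3, pad_id=0):
    """INF-style loss.

    logits:  (batch, seq_len, vocab) model outputs
    targets: (batch, seq_len) reference token ids
    """
    # Per-token cross-entropy, kept unreduced so it can be reweighted.
    ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
        reduction="none",
    ).reshape(targets.shape)

    # Weight each position by the inverse count of the n-gram ending
    # there; positions with fewer than n-1 preceding tokens keep weight
    # 1, and unseen n-grams get add-one smoothing (both assumptions).
    weights = torch.ones_like(ce)
    for b in range(targets.size(0)):
        ids = targets[b].tolist()
        for t in range(n - 1, len(ids)):
            ngram = tuple(ids[t - n + 1:t + 1])
            weights[b, t] = 1.0 / (ngram_counts.get(ngram, 0) + 1)

    mask = (targets != pad_id).float()
    return (weights * ce * mask).sum() / mask.sum()
```

Because the weight at position t depends on the preceding n-1 tokens as well as the target token itself, it changes dynamically with the context, which is how INF differs from ITF, where each token carries a fixed weight.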

© 2022 The Japanese Society for Artificial Intelligence