Host: The Japanese Society for Artificial Intelligence
Name: 34th Annual Conference, 2020
Number: 34
Location: Online
Date: June 9, 2020 - June 12, 2020
In recent years, sequence-to-sequence models such as Seq2Seq and the Transformer have become commonplace architectures for dialogue systems. However, more natural and intelligent dialogue requires understanding context and making use of knowledge, and it has been argued that such models are limited in their ability to store information over long time spans. To retain long-term information, neural network models with external memory, such as End-To-End Memory Networks and the Differentiable Neural Computer (DNC), have been proposed. In this work, we extend the DNC architecture and propose a model that uses both context and structured knowledge. We conducted an experiment on a dataset composed of series of coherently linked questions, answerable only with a large-scale knowledge graph, paired with their answers. The mean test error rate after 20k iterations was 69.25%, slightly higher than the original DNC's error rate of 69.09%.
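The abstract does not include code, so as a rough illustration of the external-memory mechanism shared by End-To-End Memory Networks and the DNC, the sketch below implements content-based addressing: a controller-emitted key is compared against every memory slot by cosine similarity, sharpened by a strength parameter, and the resulting weighting produces a read vector. The function and variable names (content_based_read, beta) are ours for illustration, not identifiers from the paper.

```python
import numpy as np

def content_based_read(memory, key, beta):
    """Content-based addressing in the DNC style.

    memory: (N, W) matrix of N slots with word size W
    key:    (W,) read key emitted by the controller
    beta:   scalar >= 1 controlling softmax sharpness
    """
    # Cosine similarity between the key and each memory row.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms
    # Softmax over slots, sharpened by beta, gives the read weighting.
    weights = np.exp(beta * similarity)
    weights /= weights.sum()
    # The read vector is the weighted sum of memory rows.
    return weights @ memory

# Toy usage: 4 memory slots, word size 3.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.5, 0.5, 0.0]])
key = np.array([1.0, 0.1, 0.0])
print(content_based_read(memory, key, beta=5.0))
```

Because every step is differentiable, the weighting (and hence what gets read) can be trained end to end with the controller, which is what lets such models hold information across far longer spans than a fixed-size recurrent state.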