2017 Volume 29 Issue 3 Pages 595-604
In natural language processing, estimating the meaning of a sentence is an important problem. Various mathematical representations have been proposed for declarative sentences, such as syntactic information and distributed representations like word2vec. However, no mathematical representation that accounts for dialogue context has been proposed. This paper proposes new neural networks that estimate the features of dialogue context as a distributed representation. We adapt two kinds of conventional neural networks employed in word2vec to operate on sentences, and combine them to generate the distributed representation. In the first network, a Long Short-Term Memory (LSTM), a type of recurrent neural network, serves as the encoder and is trained to estimate the words (bag-of-words) contained in the utterances immediately preceding and following an input utterance. In the second network, several LSTMs serve as the encoder, and the network is trained to estimate the words (bag-of-words) contained in the next utterance. The encoders of both networks are then combined into a new network that generates the distributed representation for a new input utterance. We carried out two kinds of experiments to evaluate the proposed networks: dialogue breakdown detection and dialogue act prediction. In both experiments, the proposed network achieves higher accuracy than conventional methods.
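The first network described above can be sketched as follows: an LSTM encodes an input utterance token by token, and the final hidden state is projected to two bag-of-words outputs, one for the previous utterance and one for the next. This is a minimal illustrative sketch, not the authors' actual implementation; all layer sizes, weight initializations, and variable names are assumptions, and the training loop is omitted.

```python
import numpy as np

# Sketch of the first proposed network: an LSTM encoder whose final
# hidden state is projected to two bag-of-words (BoW) predictions,
# one for the previous utterance and one for the next utterance.
# All dimensions and names here are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with input, forget, output, and candidate gates."""
    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        # one stacked weight matrix covering all four gates
        self.W = rng.uniform(-s, s, (4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)
        self.hid_dim = hid_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def encode(cell, embeddings, token_ids):
    """Run the LSTM over the word embeddings of one utterance;
    the final hidden state is the utterance representation."""
    h = np.zeros(cell.hid_dim)
    c = np.zeros(cell.hid_dim)
    for t in token_ids:
        h, c = cell.step(embeddings[t], h, c)
    return h

# Assumed toy sizes: a 100-word vocabulary, 16-dim embeddings,
# a 32-dim hidden state.
vocab, emb_dim, hid_dim = 100, 16, 32
E = rng.normal(0, 0.1, (vocab, emb_dim))        # word embedding table
cell = LSTMCell(emb_dim, hid_dim)
W_prev = rng.normal(0, 0.1, (vocab, hid_dim))   # BoW head: previous utterance
W_next = rng.normal(0, 0.1, (vocab, hid_dim))   # BoW head: next utterance

utterance = [3, 17, 42, 7]                      # token ids of the input utterance
h = encode(cell, E, utterance)
prev_bow = sigmoid(W_prev @ h)                  # per-word probabilities
next_bow = sigmoid(W_next @ h)
```

In training, the two BoW heads would be fit against the words actually observed in the adjacent utterances; after training, the encoder alone produces the distributed representation of a new utterance.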