Journal of Natural Language Processing
Online ISSN : 2185-8314
Print ISSN : 1340-7619
ISSN-L : 1340-7619
Paper
Recurrent Neural Networks for Word Alignment
Akihiro Tamura, Taro Watanabe, Eiichiro Sumita

2015 Volume 22 Issue 4 Pages 289-312

Abstract
This paper proposes a novel word alignment model based on a recurrent neural network (RNN), in which an unlimited alignment history is represented by recurrently connected hidden layers. In addition, we perform unsupervised learning inspired by Dyer et al. (2011), which utilizes artificially generated negative samples. Like the generative IBM models (Brown et al. 1993), our alignment model is directional. To overcome this limitation, we encourage agreement between the two directional models by introducing a penalty function that ensures word-embedding consistency across the two models during training. The RNN-based model outperforms both the feed-forward NN-based model (Yang et al. 2013) and IBM Model 4 on Japanese-English and French-English word alignment tasks, and achieves translation performance comparable to these baselines on Japanese-English and Chinese-English translation tasks.
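The sketch below illustrates, in NumPy, the three ideas summarized in the abstract: an RNN whose hidden state carries the unlimited alignment history, an unsupervised ranking loss over artificially generated negative samples, and a penalty encouraging word-embedding agreement between the two directional models. All names, dimensions, and functional forms are illustrative assumptions, not the authors' implementation; see the paper for the actual model.

import numpy as np

# Illustrative sketch only: parameter names, sizes, and loss forms are
# assumptions, not the model from the paper.
rng = np.random.default_rng(0)

V_SRC, V_TGT = 1000, 1000   # vocabulary sizes (assumed)
D_EMB, D_HID = 50, 100      # embedding / hidden dimensions (assumed)

E_src = rng.normal(scale=0.1, size=(V_SRC, D_EMB))  # source word embeddings
E_tgt = rng.normal(scale=0.1, size=(V_TGT, D_EMB))  # target word embeddings
W_in = rng.normal(scale=0.1, size=(D_HID, 2 * D_EMB))
W_rec = rng.normal(scale=0.1, size=(D_HID, D_HID))
b = np.zeros(D_HID)
w_out = rng.normal(scale=0.1, size=D_HID)

def alignment_score(src_ids, tgt_ids, alignment):
    """Score an alignment: for each target position j aligned to source
    position alignment[j], embed the word pair and update the recurrent
    hidden state, which summarizes the unlimited alignment history."""
    h = np.zeros(D_HID)
    total = 0.0
    for j, a_j in enumerate(alignment):
        x = np.concatenate([E_src[src_ids[a_j]], E_tgt[tgt_ids[j]]])
        h = np.tanh(W_in @ x + W_rec @ h + b)  # recurrence over the history
        total += w_out @ h                     # score of this alignment link
    return total

def ranking_loss(pos_score, neg_score, margin=1.0):
    """Unsupervised training signal: prefer the observed sentence pair over
    an artificially generated negative sample (hinge form is an assumption)."""
    return max(0.0, margin - pos_score + neg_score)

def agreement_penalty(emb_a, emb_b, alpha=0.1):
    """Penalty encouraging consistent word embeddings across the two
    directional models (squared-difference form is an assumption)."""
    return alpha * np.sum((emb_a - emb_b) ** 2)

# Toy usage: score a 3-word sentence pair under a diagonal alignment against
# a negative sample obtained by replacing target words at random.
src, tgt = [1, 2, 3], [4, 5, 6]
neg_tgt = rng.integers(0, V_TGT, size=len(tgt)).tolist()
loss = ranking_loss(alignment_score(src, tgt, [0, 1, 2]),
                    alignment_score(src, neg_tgt, [0, 1, 2]))

In the paper itself the negative sampling and the agreement term are defined over the full model parameters of the two directional models; the sketch above only fixes the intuition behind the abstract's description.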
© 2015 The Association for Natural Language Processing