2018 Volume 25 Issue 2 Pages 167-199
In this paper, we propose a new method for calculating the output layer in neural machine translation systems that greatly reduces computation cost using binary code. The method predicts a bit array instead of actual output symbols to obtain word probabilities, and can reduce the computation time and memory requirements of the output layer to logarithmic in vocabulary size in the best case. In addition, since the proposed model is more difficult to train than softmax models, we also introduce two approaches to improve its translation quality: combining softmax and our models, and using error-correcting codes. Experiments on English-Japanese bidirectional translation tasks show that the proposed models achieve BLEU scores approaching those of softmax models, while reducing memory usage by an order of magnitude and improving decoding speed on CPUs by a factor of 5 to 10.
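The core idea of predicting a bit array can be sketched as follows: each word ID is mapped to its binary code of ceil(log2 V) bits, the output layer predicts one sigmoid probability per bit, and the word probability is the product of the per-bit probabilities. This is a minimal illustration under the assumption of independent bits, not the paper's full implementation; all names and dimensions here are illustrative.

```python
import math
import numpy as np

def word_to_bits(word_id, n_bits):
    # Binary code of the word ID, least-significant bit first.
    return [(word_id >> i) & 1 for i in range(n_bits)]

def bit_probabilities(hidden, W, b):
    # n_bits sigmoid outputs instead of a V-way softmax:
    # the weight matrix is (n_bits x H) rather than (V x H).
    logits = W @ hidden + b
    return 1.0 / (1.0 + np.exp(-logits))

def word_probability(hidden, W, b, word_id):
    # Product of per-bit probabilities (bits treated as independent).
    q = bit_probabilities(hidden, W, b)
    bits = word_to_bits(word_id, len(q))
    return float(np.prod([q[i] if t == 1 else 1.0 - q[i]
                          for i, t in enumerate(bits)]))

# Toy dimensions: vocabulary of 256 words needs only 8 output units.
V, H = 256, 4
n_bits = math.ceil(math.log2(V))
rng = np.random.default_rng(0)
W = rng.normal(size=(n_bits, H)) * 0.1
b = np.zeros(n_bits)
h = rng.normal(size=H)
p = word_probability(h, W, b, word_id=42)
```

Because every one of the 2^n_bits bit patterns corresponds to exactly one word, the per-word probabilities sum to one by construction, so no explicit normalization over the vocabulary is needed.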