2021 Volume 28 Issue 2 Pages 450-478
Neural Machine Translation (NMT) has shown drastic improvements in quality when translating clean input such as text from the news domain. However, existing studies suggest that NMT still struggles with certain kinds of noisy input, such as User-Generated Content (UGC) on the Internet. To make better use of NMT for cross-cultural communication, one of the most promising directions is to develop translation models that correctly handle such informal expressions. Although its importance has been recognized, it remains unclear what creates the large performance gap between the translation of clean input and that of UGC. To answer this question, we present a new dataset, PheMT, for evaluating the robustness of MT systems against specific linguistic phenomena in Japanese-English translation. We provide a fine-grained error analysis of model behavior, reporting accuracy and the relative drop in translation quality on contrastive datasets specifically designed for each phenomenon. Our experiments with the dataset revealed that not only our in-house models but also widely used off-the-shelf systems are greatly disturbed by the presence of certain phenomena.