Journal of Natural Language Processing
Online ISSN : 2185-8314
Print ISSN : 1340-7619
ISSN-L : 1340-7619
Volume 25, Issue 2
Preface
Paper
  • Yusuke Oda, Philip Arthur, Graham Neubig, Koichiro Yoshino, Satoshi Na ...
    2018 Volume 25 Issue 2 Pages 167-199
    Published: March 15, 2018
    Released on J-STAGE: June 15, 2018
    JOURNAL FREE ACCESS

    In this paper, we propose a new method for calculating the output layer of neural machine translation systems that greatly reduces computation cost by using binary codes. The method predicts a bit array instead of the actual output symbols to obtain word probabilities, and in the best case reduces the computation time and memory requirements of the output layer to logarithmic in the vocabulary size. Since the proposed model is more difficult to train than softmax models, we also introduce two approaches to improve its translation quality: combining the softmax model with ours, and using error-correcting codes. Experiments on bidirectional English-Japanese translation tasks show that the proposed models achieve BLEU scores approaching those of the softmax model, while reducing memory usage to roughly one tenth and improving decoding speed on CPUs by a factor of 5 to 10.
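The key idea of the abstract, predicting a bit array rather than a score per word, can be sketched as follows. This is an illustrative simplification, not the authors' implementation: all function names are hypothetical, and it shows only why ceil(log2 V) sigmoid outputs suffice to address a vocabulary of size V, with a word probability formed as a product of independent per-bit probabilities.

```python
import math

def word_to_bits(word_id, n_bits):
    """Encode a vocabulary index as a fixed-length bit array."""
    return [(word_id >> i) & 1 for i in range(n_bits)]

def bits_to_word(bits):
    """Decode a predicted bit array back to a vocabulary index."""
    return sum(b << i for i, b in enumerate(bits))

def word_probability(bit_probs, word_id):
    """P(word) as the product of independent per-bit sigmoid probabilities."""
    p = 1.0
    for i, q in enumerate(bit_probs):
        bit = (word_id >> i) & 1
        p *= q if bit == 1 else (1.0 - q)
    return p

V = 65536                            # vocabulary size
n_bits = math.ceil(math.log2(V))     # 16 outputs instead of 65536
assert bits_to_word(word_to_bits(42, n_bits)) == 42
```

Decoding is where the speedup appears: picking the most likely word means thresholding each of the 16 bits independently, instead of computing and normalizing 65,536 softmax scores.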

  • Gongye Jin, Daisuke Kawahara, Sadao Kurohashi
    2018 Volume 25 Issue 2 Pages 201-221
    Published: March 15, 2018
    Released on J-STAGE: June 15, 2018
    JOURNAL FREE ACCESS

    This paper presents a method for improving semantic role labeling (SRL) using a large amount of automatically acquired knowledge. We acquire two varieties of knowledge, which we call surface case frames and deep case frames. Although the surface case frames are compiled from syntactic parses and can be used as rich syntactic knowledge, they have limited capability for resolving semantic ambiguity. To compensate for this deficiency, we compile deep case frames from automatic semantic role analyses. We also apply quality management to both types of knowledge in order to remove the noise introduced by the automatic analyses. The experimental results show that Chinese SRL can be improved using automatically acquired knowledge and that quality management has a positive effect on this task.
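The contrast between the two knowledge types can be made concrete with a toy data structure. This is a minimal sketch under assumed representations (the frame layout, slot names, and lookup function are illustrative, not the paper's format): a surface case frame records fillers per syntactic slot, while a deep case frame records fillers per semantic role for the same predicate.

```python
from collections import defaultdict

# predicate -> slot/role -> list of observed filler words
surface_frames = defaultdict(lambda: defaultdict(list))
deep_frames = defaultdict(lambda: defaultdict(list))

# From a syntactic parse of "The company acquired the startup":
surface_frames["acquire"]["subject"].append("company")
surface_frames["acquire"]["object"].append("startup")

# From an automatic SRL analysis of the same clause:
deep_frames["acquire"]["Agent"].append("company")
deep_frames["acquire"]["Patient"].append("startup")

def role_candidates(frames, predicate, slot):
    """Look up previously observed fillers as evidence for labeling."""
    return frames[predicate][slot]
```

Aggregated over a large parsed corpus, such frequency counts let an SRL system prefer, for example, the Agent role for "company" even when the syntax alone is ambiguous.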

  • Tomoyuki Kajiwara, Mamoru Komachi
    2018 Volume 25 Issue 2 Pages 223-249
    Published: March 15, 2018
    Released on J-STAGE: June 15, 2018
    JOURNAL FREE ACCESS

    Several studies on automated text simplification are based on a large-scale monolingual parallel corpus constructed from a comparable corpus comprising complex and simple text. However, constructing such a parallel corpus is expensive, as large-scale simplified corpora are unavailable in most languages other than English. Therefore, we propose an unsupervised method that automatically builds a pseudo-parallel corpus to train a text simplification model. Our framework combines readability assessment and sentence alignment, and automatically constructs a text simplification corpus from only a raw corpus. Experimental results show that a statistical machine translation model trained on our corpus can generate simpler synonymous sentences and performs comparably to models trained on a large-scale simplified corpus.
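The combination of readability assessment and sentence alignment described above can be sketched as follows. All scoring functions here are crude stand-ins chosen for illustration (word overlap for alignment, length for readability), not the authors' actual measures: pair up sentences from a raw corpus that are semantically similar, then order each pair by readability to obtain (complex, simple) training examples.

```python
def readability(sentence):
    """Toy readability proxy: fewer and shorter words score higher (easier)."""
    words = sentence.split()
    return -(len(words) + sum(len(w) for w in words) / max(len(words), 1))

def similarity(a, b):
    """Toy alignment score: Jaccard word overlap between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_pseudo_parallel(corpus, sim_threshold=0.4):
    """Collect (complex, simple) pairs: similar sentences, ordered by readability."""
    pairs = []
    for i, a in enumerate(corpus):
        for b in corpus[i + 1:]:
            if similarity(a, b) >= sim_threshold:
                complex_s, simple_s = sorted((a, b), key=readability)
                pairs.append((complex_s, simple_s))
    return pairs
```

For example, "the cat sat on the large mat" and "the cat sat on the mat" overlap heavily, so they form one pseudo-parallel pair with the longer sentence on the complex side; an SMT model can then be trained on such pairs as if they were a translation corpus.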
