2020 Volume 35 Issue 4 Pages E-K25_1-17
We propose a method to assist legislative drafters that locates inappropriate legal terms in Japanese statutory sentences and suggests corrections. We focus on sets of mistakable legal terms whose usages are defined in legislation drafting rules. Our method predicts suitable legal terms using a classifier based on BERT (Bidirectional Encoder Representations from Transformers). The BERT classifier is pretrained on a huge number of sentences and thus contains abundant linguistic knowledge. Classifiers for predicting legal terms suffer from two levels of infrequency: term-level infrequency and set-level infrequency. The former causes a class imbalance problem and the latter causes an underfitting problem; both degrade classification performance. To overcome these problems, we apply three techniques, namely, preliminary domain adaptation, repetitive soft undersampling, and classifier unification. Preliminary domain adaptation improves overall performance by providing prior knowledge of statutory sentences, repetitive soft undersampling overcomes term-level infrequency, and classifier unification overcomes set-level infrequency while reducing storage consumption. Our experiments show that our classifier outperforms conventional classifiers based on Random Forest or language models, and that all three training techniques improve performance.