Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Explainable Natural Language Inference in the Legal Domain via Text Generation
Jungmin Choi, Ukyo Honda, Taro Watanabe, Kentaro Inui

2023, Volume 38, Issue 3, p. C-MB6_1-11

Abstract

Natural language inference (NLI) in the legal domain is the task of predicting entailment between a premise, i.e., a law, and a hypothesis, which is a statement regarding a legal issue. Current state-of-the-art approaches to NLI with pre-trained language models do not perform well in the legal domain, presumably due to a discrepancy in the level of abstraction between the premise and the hypothesis and the convoluted nature of legal language. Some of the difficulties specific to the legal domain are that 1) the premise and hypothesis tend to be extensive in length; 2) the premise comprises multiple rules, only one of which is related to the hypothesis, so only a small fraction of the statements is relevant for determining entailment while the rest is noise; and 3) the premise is often abstract and written in legal terms, whereas the hypothesis is a concrete case and tends to be written with more ordinary vocabulary. These problems are accentuated by the scarcity of such data in the legal domain due to the high cost of creating it.

Pre-trained language models have been shown to be effective on natural language inference tasks in the legal domain. However, previous methods do not provide an explanation for their decisions, which is especially desirable in knowledge-intensive domains such as law.

This study proposes to leverage the characteristics of legal texts by decomposing the overall NLI task into two simpler sub-steps. Specifically, we regard the hypothesis as a pair of a condition and a consequence, and train a conditional language model to generate the consequence from a given premise and condition. The trained model can be regarded as a knowledge source that generates a consequence for a query consisting of the premise and the condition. Because the model is trained on entailment examples only, it should generate a consequence similar to the original one for an entailment example and a dissimilar one for a contradiction example. We then train a classifier that compares the generated consequence with the consequence part of the hypothesis to judge whether they are similar or dissimilar. Experimental results on datasets derived from the Japanese bar exam show a significant improvement in accuracy over prior methods.

© The Japanese Society for Artificial Intelligence 2023