2016 Volume 23 Issue 3 Pages 267-299
Abduction, also known as Inference to the Best Explanation, has long been considered a promising framework for natural language processing (NLP). While recent advances in automatic world knowledge acquisition have made it feasible to build large-scale knowledge bases, the computational complexity of abduction hinders its application to real-life problems. In particular, when a knowledge base contains functional literals, which express dependency relations between words, the size of the search space increases substantially. In this study, we propose a method to improve the efficiency of first-order abductive reasoning. By exploiting a property of functional literals, the proposed method prunes inferences that do not lead to reasonable explanations. Furthermore, we prove that the proposed method is sound under a particular condition. In our experiments, we apply abduction with a large-scale knowledge base to a real-life NLP task, and show that our method significantly improves the computational efficiency of first-order abductive reasoning compared with a state-of-the-art system.
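To make the setting concrete, the following is a minimal, hypothetical sketch of propositional abduction (not the paper's first-order system or its pruning method): given Horn-style rules and an observation, we backward-chain to enumerate candidate sets of assumptions that would explain the observation, then prefer a minimal one. All rule names and literals here are invented for illustration.

```python
# Toy abductive reasoner (illustrative only; propositional, not first-order).
# Each rule is (antecedents, consequent): the antecedents jointly imply the consequent.
RULES = [
    (("rain",), "wet-ground"),
    (("sprinkler-on",), "wet-ground"),
    (("wet-ground", "cold"), "icy-road"),
]

def explain(goal, depth=2):
    """Enumerate sets of literals that, if assumed, would explain `goal`."""
    results = [frozenset([goal])]  # trivially, assume the goal itself
    if depth == 0:
        return results
    for antecedents, consequent in RULES:
        if consequent == goal:
            # Combine explanations of each antecedent (cross product).
            combos = [frozenset()]
            for a in antecedents:
                combos = [c | e for c in combos for e in explain(a, depth - 1)]
            results.extend(combos)
    return results

candidates = explain("icy-road")
# Exclude the trivial "assume the observation" explanation, then
# pick a best explanation by minimality (fewest assumed literals).
nontrivial = [e for e in candidates if "icy-road" not in e]
best = min(nontrivial, key=len)
```

Real abductive NLP systems replace the minimality criterion with weighted costs and, as in this study, must additionally control the combinatorial growth of candidate hypotheses, which is where pruning becomes essential.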