2024 Volume 31 Issue 4 Pages 1427-1457
Pre-trained Language Models (PLMs) can answer known problems using the knowledge and natural language understanding capabilities acquired during pre-training, whereas unknown problems require pure inference capability to answer. To evaluate pure inference capability, memorization capability must be assessed separately, which is difficult with existing datasets because their contents are already known to PLMs. This study targets Knowledge Graph Completion (KGC), the task of predicting unknown relations (links) from known ones in a knowledge graph. Traditional embedding-based KGC methods predict missing links through pure inference, whereas recent PLM-based KGC methods also exploit knowledge obtained during pre-training. KGC is therefore well suited to evaluating the respective effects of memorization capability and inference capability. We propose a method for constructing datasets that measure the performance of memorized knowledge and of inference capability in KGC. We then discuss whether PLMs make inferences based on memorized knowledge about entities; our findings suggest that PLMs also learn inference capabilities applicable to unknown problems.
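To make the contrast concrete, the following is a minimal sketch of embedding-based KGC in the TransE style, where a triple (h, r, t) is plausible when h + r ≈ t in embedding space. All entity and relation names, dimensions, and vectors here are toy assumptions for illustration, not the paper's dataset or model.

```python
import numpy as np

# Toy illustration of embedding-based KGC (TransE-style scoring).
# Entities, relations, and embeddings are hypothetical examples.
rng = np.random.default_rng(0)
dim = 8
entities = ["Tokyo", "Japan", "Paris", "France"]
relations = ["capital_of"]

# Random toy embeddings; in practice these are learned from known links.
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

# Simulate the result of training on the known triple
# (Tokyo, capital_of, Japan) by making h + r equal t exactly.
R["capital_of"] = E["Japan"] - E["Tokyo"]

def score(h, r, t):
    """TransE score: higher (less negative) means more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def predict_tail(h, r):
    """Rank all entities as candidate tails for the query (h, r, ?)."""
    return max(entities, key=lambda t: score(h, r, t))

print(predict_tail("Tokyo", "capital_of"))  # → Japan
```

The prediction here comes purely from the geometry of the learned vectors, with no access to pre-trained textual knowledge; a PLM-based KGC method, by contrast, could answer the same query from knowledge memorized during pre-training, which is precisely the confound the proposed datasets are designed to separate.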