2017 Volume E100.D Issue 12 Pages 2923-2930
On an inference-enabled Linked Open Data (LOD) endpoint, query execution usually takes longer than on an LOD endpoint without an inference engine, due to the cost of reasoning. Although two separate kinds of approaches, query modification and ontology modification, have been investigated in different contexts, how they should be chosen or combined for various settings remains under discussion. In this paper, to reduce query execution time on an inference-enabled LOD endpoint, we compare these two promising methods, query rewriting and ontology modification, and also combine them in a cluster of such systems. We employ an evolutionary approach that rewrites queries and modifies ontologies based on past-processed queries and their results. We show how the two approaches work well for implementing an inference-enabled LOD endpoint as a cluster of SPARQL endpoints.
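To illustrate the two approaches being compared, the following is a minimal sketch: query rewriting expands a class term with its subclass closure at query time, so a reasoning-free endpoint still returns the inferred answers, while ontology modification materializes the inferred `rdf:type` triples in advance so the original query can run unchanged. The toy triple list, class hierarchy, and helper names are illustrative assumptions, not the paper's implementation.

```python
# Toy RDF-style triples: (subject, predicate, object).
data = [
    ("ex:alice", "rdf:type", "ex:Student"),
    ("ex:bob",   "rdf:type", "ex:Professor"),
]
# Ontology fragment: both classes are subclasses of ex:Person.
subclass_of = {"ex:Student": "ex:Person", "ex:Professor": "ex:Person"}

def is_subclass(c, target):
    """True if c is target or a (transitive) subclass of target."""
    while True:
        if c == target:
            return True
        if c not in subclass_of:
            return False
        c = subclass_of[c]

def query_instances(triples, cls):
    """Plain query without reasoning: exact class match only."""
    return {s for s, p, o in triples if p == "rdf:type" and o == cls}

# Approach 1: query rewriting -- expand the queried class with its
# subclass closure before asking a reasoning-free endpoint.
def rewritten_query(triples, cls):
    expanded = {c for c in list(subclass_of) + [cls] if is_subclass(c, cls)}
    return set().union(*(query_instances(triples, c) for c in expanded))

# Approach 2: ontology modification -- materialize the inferred
# rdf:type triples once, then run the original, unmodified query.
def materialize(triples):
    inferred = list(triples)
    for s, p, o in triples:
        c = o
        while p == "rdf:type" and c in subclass_of:
            c = subclass_of[c]
            inferred.append((s, "rdf:type", c))
    return inferred

# Both routes yield the same answer set for "all ex:Person instances".
print(rewritten_query(data, "ex:Person"))                  # {'ex:alice', 'ex:bob'}
print(query_instances(materialize(data), "ex:Person"))     # {'ex:alice', 'ex:bob'}
```

The trade-off the paper studies follows directly from this sketch: rewriting pays the expansion cost on every query, whereas materialization pays a one-time storage and maintenance cost, which is why the better choice depends on the query workload.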