With the evolution of the global Internet, it has become increasingly common for companies to exchange various data automatically among themselves. Moreover, content providers, such as broadcasting stations, are being required to change their content-serving strategy so that content can reach the viewers awaiting it via various external services. To address this change, we propose in this paper program-related information as machine-readable web data that can be used in external services. We report on the construction of a program information database using the linked open data (LOD) techniques recommended by the World Wide Web Consortium. Based on experimental results, we determine that services employing a variety of program information can be realized by representing knowledge about the content as LOD.
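As a minimal sketch of what machine-readable program information could look like as LOD (the namespace, the schema.org properties, and the programme values below are illustrative assumptions, not the vocabulary actually adopted in the paper), a single broadcast program might be published as RDF triples, for example with Python and rdflib:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/programme/")   # hypothetical namespace
    SCHEMA = Namespace("http://schema.org/")

    g = Graph()
    g.bind("schema", SCHEMA)

    prog = EX["prog-001"]                              # hypothetical programme ID
    g.add((prog, RDF.type, SCHEMA.BroadcastEvent))
    g.add((prog, SCHEMA.name, Literal("Evening News", lang="en")))
    g.add((prog, SCHEMA.startDate, Literal("2015-04-01T19:00:00", datatype=XSD.dateTime)))
    g.add((prog, SCHEMA.description, Literal("Daily news programme.", lang="en")))

    print(g.serialize(format="turtle"))                # returns a str in rdflib >= 6

Publishing such triples at dereferenceable URIs is what makes the program information reusable by external services without bilateral data-exchange agreements.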
When converting open data published in the form of CSV ("comma-separated values") files or spreadsheets into Linked Data, especially for use in combination with other Linked Data, the data should be written with a unified vocabulary and include links to other data sets. We report technologies that support the conversion of open data into Linked Data by automating vocabulary unification and linking to other Linked Data. For vocabulary unification, we present a vocabulary suggestion technology that searches existing LOD ("Linked Open Data") and infers a suitable vocabulary set for a given CSV file. For automatic linking, we present an identity inference technology that calculates the degree of similarity between entities in the CSV data and entities in the LOD. These technologies help people convert their data into Linked Data, but we note that a mechanism for incorporating human knowledge is needed in order to correct errors.
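A rough sketch of the identity-inference side, using plain string similarity only (the scoring in the paper is more involved, and the function names, threshold, and example entities here are hypothetical):

    import difflib

    def label_similarity(a, b):
        # Plain string similarity as a stand-in for the paper's identity-inference score.
        return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def suggest_links(csv_labels, lod_entities, threshold=0.7):
        """lod_entities: iterable of (uri, label) pairs harvested from existing LOD.
        Returns candidate owl:sameAs links whose score exceeds the (illustrative) threshold."""
        links = []
        for csv_label in csv_labels:
            uri, label = max(lod_entities, key=lambda e: label_similarity(csv_label, e[1]))
            score = label_similarity(csv_label, label)
            if score >= threshold:
                links.append((csv_label, uri, round(score, 2)))
        return links

    print(suggest_links(["Tokyo Sta."],
                        [("http://example.org/TokyoStation", "Tokyo Station"),
                         ("http://example.org/Kyoto", "Kyoto")]))

Candidates below the threshold, or ties between several LOD entities, are exactly the cases where the human-in-the-loop correction mentioned above becomes necessary.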
Intellectual activity "Open Data" will be formed by standardized "information model" to integrate data from distributed systems rather than enclosing the data into a single system. Office and productivity applications will feature measurability as well as function and user interface.
In this paper, we propose an automatic construction method for an ontology of Kampo (traditional Japanese herbal medicine) using table structure and the EDR Electronic Dictionary. By using table structure and the EDR Electronic Dictionary, it is possible to build a reliable ontology automatically. In addition, the proposed construction method is also effective for concepts that are not yet covered by any ontology. We aim at the automatic construction of an ontology specialized in the Kampo field, where ontology development has not progressed.
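A toy sketch of how an is-a hierarchy might be read off a table (the reading direction, the example Kampo terms, and the function name are our assumptions; the actual method additionally consults the EDR Electronic Dictionary to validate the extracted concepts):

    from collections import defaultdict

    def table_to_isa(rows):
        """Read each table row left to right as a path from broader to narrower
        concepts; a rough stand-in for deriving is-a relations from table structure."""
        isa = defaultdict(set)                     # parent concept -> child concepts
        for row in rows:
            for parent, child in zip(row, row[1:]):
                if parent and child:
                    isa[parent].add(child)
        return isa

    rows = [["Kampo formula", "Kakkonto", "Kakkonto extract granules"],
            ["Kampo formula", "Shoseiryuto", "Shoseiryuto extract granules"]]
    print(dict(table_to_isa(rows)))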
We have proposed an ontology refinement system that can produce a more consistent ontology by correcting differences in granularity between is-a hierarchies. In this paper, we discuss this refinement system and an evaluation technique.
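A toy illustration of spotting granularity mismatches between two is-a hierarchies (not the paper's actual refinement procedure; the concept names and edge representation are hypothetical):

    def granularity_gaps(coarse_isa, fine_isa):
        """Find direct is-a edges in the coarse hierarchy that the fine hierarchy
        expands through an intermediate concept; such pairs signal a granularity
        mismatch that a refinement step could smooth out."""
        gaps = []
        for parent, child in coarse_isa:
            for p2, mid in fine_isa:
                if p2 == parent and (mid, child) in fine_isa and (parent, mid) not in coarse_isa:
                    gaps.append((parent, mid, child))
        return gaps

    coarse = {("vehicle", "sedan")}
    fine = {("vehicle", "car"), ("car", "sedan")}
    print(granularity_gaps(coarse, fine))   # [('vehicle', 'car', 'sedan')]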
RDF data stores need to search metadata quickly in large-scale RDF data, such as Gene Ontology and DBpedia. We present a compressed index structure for RDF data in order to develop a fast in-memory RDF database system (called FROST) that stores RDF triples compactly. We show the advantage of FROST using the LUBM dataset (a benchmarking framework for semantic repositories).
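For flavour, a minimal dictionary-encoded in-memory triple index in Python (a generic textbook-style sketch of term-ID encoding plus a sorted SPO index, not FROST's actual compressed layout):

    import bisect

    class TripleIndex:
        """Dictionary-encoded, sorted in-memory triple index (illustrative only)."""
        def __init__(self):
            self.term_to_id = {}
            self.id_to_term = []
            self.spo = []                      # sorted list of (s, p, o) integer triples

        def _encode(self, term):
            if term not in self.term_to_id:
                self.term_to_id[term] = len(self.id_to_term)
                self.id_to_term.append(term)
            return self.term_to_id[term]

        def add(self, s, p, o):
            bisect.insort(self.spo, (self._encode(s), self._encode(p), self._encode(o)))

        def subjects_of(self, s):
            """Return all (predicate, object) pairs for a given subject term."""
            sid = self.term_to_id.get(s)
            if sid is None:
                return []
            i = bisect.bisect_left(self.spo, (sid,))
            out = []
            while i < len(self.spo) and self.spo[i][0] == sid:
                _, p_id, o_id = self.spo[i]
                out.append((self.id_to_term[p_id], self.id_to_term[o_id]))
                i += 1
            return out

    idx = TripleIndex()
    idx.add("ex:GO_0008150", "rdfs:label", '"biological_process"')
    print(idx.subjects_of("ex:GO_0008150"))

Replacing repeated IRI strings with small integers and keeping the triples in sorted arrays is the basic idea behind compact in-memory RDF storage; real systems add further compression on top of this.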
There are multiple triple stores that accept SPARQL queries, but not all of them support SPARQL 1.1, which was published as a W3C Recommendation in 2013. The developers of these stores continuously update their implementations, and new versions are released one after another. In this situation, we want to know which triple stores support particular parts of the specification, such as the SERVICE or VALUES keywords newly introduced in SPARQL 1.1. This paper introduces a system that tests each triple store by issuing a series of queries covering a wide range of the SPARQL 1.1 specification. Each query is annotated, using the SPARQL Inferencing Notation (SPIN) vocabulary, with the specification features it exercises, so that we can easily narrow down the triple stores by choosing specific features and learn whether a given triple store supports them.
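A sketch of such a probing approach (the endpoint URL is hypothetical, the probe queries are our own, and the real system additionally annotates each query with SPIN metadata rather than hard-coding a feature table):

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Probe queries keyed by the SPARQL 1.1 feature they exercise.
    PROBES = {
        "VALUES": "SELECT ?x WHERE { VALUES ?x { 1 2 3 } }",
        "SERVICE": "SELECT ?s WHERE { SERVICE <http://dbpedia.org/sparql> { ?s ?p ?o } } LIMIT 1",
    }

    def supported_features(endpoint_url):
        """Return the subset of probed SPARQL 1.1 features the endpoint accepts."""
        ok = []
        for feature, query in PROBES.items():
            sparql = SPARQLWrapper(endpoint_url)
            sparql.setQuery(query)
            sparql.setReturnFormat(JSON)
            try:
                sparql.query().convert()
                ok.append(feature)
            except Exception:
                pass   # treat any error as "feature not supported or query rejected"
        return ok

    # Example (endpoint URL is hypothetical):
    # print(supported_features("http://localhost:3030/ds/sparql"))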
Unlike RDF data, ontologies written in the expressive description logic SROIQ are not easy for users to build. In this paper, we propose a SROIQ-concept construction algorithm for RDF data. The algorithm includes minimal-model reasoning for RDF graphs, which is based on a minimal model for SROIQ-concepts under the closed-world assumption. Moreover, we implement a SROIQ-concept query system and SROIQ-concept learning for simple RDF data using the concept construction algorithm.
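As an informal illustration of the idea (the entity names are hypothetical and this is not the paper's actual output): given only the triple ex:taro ex:hasChild ex:hanako, minimal-model reasoning under the closed-world assumption treats hanako as taro's only child, so a constructed SROIQ-concept describing taro could be of the form ∃hasChild.{hanako} ⊓ ≤1 hasChild.⊤, combining a nominal with a qualified number restriction.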