We developed D2RQ Mapper (http://d2rq.dbcls.jp/), a web application for editing D2RQ mapping files. D2RQ is middleware that bridges relational databases (RDBs) and the Resource Description Framework (RDF). A D2RQ mapping file defines how data stored in an RDB is mapped to RDF; it is written in the Turtle format, and writing it with a text editor is cumbersome. D2RQ Mapper assists users in editing such files by contextualizing its input forms with the schema of the target RDB and the mapping language. In addition, D2RQ Mapper can output a mapping definition in the R2RML format. For users who need to access a target RDB within their intranet, we provide a Docker image of D2RQ Mapper.
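As a minimal sketch of the kind of mapping D2RQ Mapper edits (the table and column names are hypothetical, and this is an illustration rather than output of the tool), the snippet below builds a small D2RQ mapping and checks with rdflib that it parses as Turtle:

```python
# A minimal, hypothetical D2RQ mapping in Turtle: one class map for a
# "people" table and one property bridge for its "name" column.
from rdflib import Graph

mapping_ttl = """
@prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix map:  <http://example.org/mapping#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

map:db a d2rq:Database ;
    d2rq:jdbcDSN "jdbc:mysql://localhost/example" ;
    d2rq:jdbcDriver "com.mysql.jdbc.Driver" .

map:Person a d2rq:ClassMap ;
    d2rq:dataStorage map:db ;
    d2rq:uriPattern "person/@@people.id@@" ;
    d2rq:class foaf:Person .

map:personName a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:Person ;
    d2rq:property foaf:name ;
    d2rq:column "people.name" .
"""

g = Graph()
g.parse(data=mapping_ttl, format="turtle")
print(len(g), "triples in the mapping")
```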
This article describes a vocabulary set for the translation of physical quantities and a prototype application that uses it. The vocabulary is formalized in RDF and is used to represent facts and rules. A fact is, for example, the weight of the Statue of Liberty; a rule is, for example, that the energy of a cup of rice is 200 kcal. The application system is designed for education about global warming: it translates a volume of greenhouse gas emissions into an equivalent volume of gasoline, weight of waste, and so on.
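As a rough illustration of such a translation (the vocabulary itself is RDF-based; the Python sketch below and its conversion factors are assumptions based on commonly cited approximate values, not the paper's data):

```python
# Illustrative fact/rule translation (NOT the paper's RDF vocabulary).
# The conversion factors are rough, commonly cited approximations.
RULES = {
    # (from_unit, target): amount of target equivalent to 1 from_unit
    ("kg_CO2", "litres_gasoline"): 1 / 2.3,  # ~2.3 kg CO2 per litre burned
    ("kcal", "cups_of_rice"): 1 / 200,       # the rule cited in the abstract
}

def translate(value, from_unit, target):
    """Translate a quantity into a more tangible equivalent via a rule."""
    return value * RULES[(from_unit, target)]

print(translate(46.0, "kg_CO2", "litres_gasoline"))  # -> 20.0 litres
print(translate(600.0, "kcal", "cups_of_rice"))      # -> 3.0 cups
```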
Machine learning on RDF data has become important in the field of the Semantic Web. However, RDF data on the Web are often noisy and incomplete, and their graph structures contain redundant representations. To apply SVMs to such RDF data, we propose a kernel function that computes the similarity between resources on RDF graphs. This kernel is defined over selected features of RDF paths, which eliminates the redundancy in RDF graphs. Our experiments demonstrate the performance of the proposed kernel with SVMs on binary classification tasks for RDF resources.
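As a generic sketch of the idea (this is a plain predicate-path kernel and does not reproduce the paper's feature selection), the kernel between two resources can be computed as the inner product of their path-feature counts:

```python
# Generic path-feature kernel between two RDF resources: features are
# outgoing predicate paths up to a fixed depth; the kernel is the inner
# product of the two feature-count vectors.
from collections import Counter
from rdflib import Graph, URIRef

def path_features(g: Graph, node, depth: int) -> Counter:
    """Count predicate paths of length <= depth starting at node."""
    feats = Counter()
    frontier = [((), node)]
    for _ in range(depth):
        nxt = []
        for path, n in frontier:
            for p, o in g.predicate_objects(n):
                new_path = path + (str(p),)
                feats[new_path] += 1
                nxt.append((new_path, o))
        frontier = nxt
    return feats

def path_kernel(g: Graph, a: URIRef, b: URIRef, depth: int = 2) -> int:
    fa, fb = path_features(g, a, depth), path_features(g, b, depth)
    return sum(fa[f] * fb[f] for f in fa.keys() & fb.keys())
```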
We have previously proposed WC3 (Wikipedia Category Consistency Checker), which supports evaluating the consistency of category information in Wikipedia by using DBpedia. In this paper, we propose a Japanese version of WC3 that analyzes the Japanese Wikipedia by using Japanese DBpedia. We discuss the problems of the English version of WC3 and the difference in the amount of metadata between English DBpedia and Japanese DBpedia. Based on this discussion, we propose a new algorithm that constructs appropriate SPARQL queries for Wikipedia categories. We also discuss the results of analyses performed with the system.
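As a minimal sketch of the kind of query such an algorithm emits (the endpoint and category URI are illustrative, and the paper's actual query-construction rules are not reproduced here), a category's member articles can be fetched via dcterms:subject:

```python
# Fetch the articles filed under a given Wikipedia category in DBpedia.
from SPARQLWrapper import SPARQLWrapper, JSON

def category_members(endpoint: str, category_uri: str, limit: int = 10):
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(f"""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?article WHERE {{
            ?article dcterms:subject <{category_uri}> .
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["article"]["value"] for b in results["results"]["bindings"]]

# e.g. against the Japanese DBpedia endpoint (availability may vary):
# category_members("http://ja.dbpedia.org/sparql",
#                  "http://ja.dbpedia.org/resource/Category:物理学")
```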
There are many research activities concerning Linked Data, but Linked Data are not yet actively utilized for many purposes. One reason is that it is not easy to express various kinds of information as Linked Data. Another is the lack of standard vocabularies for many domains. It is also desirable to specify the meaning of vocabularies formally so that the meaning of Linked Data can be processed automatically. This paper proposes a method for specifying the meaning of vocabularies (in particular, predicates) used in Linked Data by using object-oriented modeling technologies and a formal specification language. With this method, standard vocabularies with formal specifications of their meaning can be generated for many domains, and the meaning of Linked Data created with these vocabularies can be processed automatically to some extent.
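As a hedged illustration of the idea in plain Python rather than the paper's specification language (the predicate and property names are hypothetical), a formal specification attached to a predicate can be turned into an automatic check over Linked Data:

```python
# Hypothetical spec for ex:olderThan: for every (a olderThan b),
# the ex:age of a must exceed the ex:age of b. Violations are reported.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

def check_older_than(g: Graph) -> list:
    violations = []
    for a, b in g.subject_objects(EX.olderThan):
        age_a = g.value(a, EX.age)
        age_b = g.value(b, EX.age)
        # Only compare when both ages are present as literals.
        if age_a is not None and age_b is not None \
                and not (age_a.toPython() > age_b.toPython()):
            violations.append((a, b))
    return violations
```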
Ontology4DICOM (Ont4D) was developed as a library of DICOM image information object definitions, especially for the curation of DICOM image metadata. Ont4D was built with the ontology editor Hozo. We intend Ont4D to be used mainly for standardizing vendor-specific DICOM images in order to reduce inefficiencies in DICOM data usage (such as DICOM image header morphing). Ont4D is serialized in the RDF/XML format defined by the World Wide Web Consortium (W3C), which means the ontology can be applied in any data-science research context, such as open data, linked data, and linked open data. Ont4D will become a key to enhancing the availability of DICOM header information in medical information systems.
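Because Ont4D is plain W3C RDF/XML, any RDF toolchain can consume it; a minimal sketch (the local file name is hypothetical):

```python
# Load the Ont4D ontology from a hypothetical local copy and list its
# classes; any RDF/XML-aware toolchain could do the same.
from rdflib import Graph, RDF
from rdflib.namespace import OWL

g = Graph()
g.parse("ont4d.rdf", format="xml")  # hypothetical file name
for cls in g.subjects(RDF.type, OWL.Class):
    print(cls)
```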