In recent years, there have been attempts to generate stories automatically using computers and other information technologies. However, it remains unclear how story sequences can be made to look natural and interesting. To enable a computational narratological analysis, this study described 134 famous Japanese detective comics, cataloguing 37 types of plot elements, 10 types of tricks, 9 types of criminal motives, and 10 types of relationships between victims and criminals. From these parameters, eleven factors were extracted through factor analysis. The structures of the plot transition networks varied according to the factors present in the detective stories. The detailed differences between these plot transition networks can therefore support an elaborate, human-like, and understandable generation of plot transitions.
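The analysis pipeline described above (binary-coded story parameters reduced to eleven factors) can be sketched with scikit-learn. The data here is a random toy stand-in, not the authors' actual coding of the 134 comics; the dimensions merely mirror the counts given in the abstract (134 stories, 37 + 10 + 9 + 10 = 66 coded parameters).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Toy stand-in for the coded dataset: 134 stories, each annotated with
# 66 binary parameters (plot elements, tricks, motives, relationships).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(134, 66)).astype(float)

# Extract eleven latent factors, as in the study.
fa = FactorAnalysis(n_components=11, random_state=0)
scores = fa.fit_transform(X)   # per-story factor scores, shape (134, 11)
loadings = fa.components_      # factor loadings, shape (11, 66)
```

Each story's factor scores could then be used to group stories before building per-group plot transition networks.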
This paper presents an attempt to provide a generic named-entity recognition and disambiguation (NERD) module called entity-fishing as a stable online service, demonstrating that sustainable technical services can be delivered within DARIAH, the European digital research infrastructure for the arts and humanities. Deployed as part of the French national infrastructure Huma-Num, the service provides an efficient state-of-the-art implementation coupled with standardised interfaces that allow easy deployment in a variety of digital humanities contexts. Initially developed in the context of the FP7 EU project CENDARI, the software was well received by the user community and was further developed within the H2020 HIRMEOS project, where several open access publishers integrated the service into their collections of published monographs to enhance retrieval and access. entity-fishing implements entity extraction as well as disambiguation against Wikipedia and Wikidata entries. The service is accessible through a REST API, which allows easy and seamless integration, a language-independent and stable convention, and a widely used service-oriented architecture (SOA) design. Input and output data are exchanged via a query data model with a defined structure, providing the flexibility to process partially annotated text or to distribute text over several queries. The interface implements a variety of functionalities, such as language recognition, sentence segmentation, and modules for accessing and looking up concepts in the knowledge base. The API itself also supports more advanced contextual parametrisation and ranked outputs, allowing for resilient integration in a wide range of use cases. The entity-fishing API has been used as a concrete use case to draft the experimental stand-off proposal, which has been submitted for integration into the TEI guidelines.
The representation is also compliant with the Web Annotation Data Model (WADM). In this paper we aim to describe the functionalities of the service as a reference contribution on the subject of web-based NERD services: we detail the workflow from input to output and unpack each building block in the processing flow. In addition, taking a more academic approach, we provide a transversal schema of the different components that takes non-functional requirements into account, in order to facilitate the discovery of bottlenecks, hotspots, and weaknesses. We also describe the underlying knowledge base, which is built from Wikipedia and Wikidata content. We conclude the paper by presenting our solution for service deployment: which resources were allocated and how. The service has been in production since Q3 of 2017 and was used extensively by the H2020 HIRMEOS partners during the integration with their publishing platforms.
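A client interaction with the service's query data model might look as follows. This is an illustrative sketch only: the exact field names of the query model and the endpoint URL are assumptions, not taken from the official entity-fishing documentation.

```python
import json

def build_query(text, lang="en"):
    """Build a NERD query following an assumed shape of the
    entity-fishing JSON query data model; field names are illustrative."""
    return {
        "text": text,                 # raw or partially annotated text
        "language": {"lang": lang},   # optional: skip language recognition
    }

query = build_query("Venice hosted a CENDARI project meeting.")
payload = json.dumps(query)

# A client would POST this payload to the service's disambiguation
# endpoint (hypothetical URL) and receive ranked entity candidates
# linked to Wikipedia and Wikidata entries, e.g. with the requests library:
# requests.post("https://nerd.example.org/service/disambiguate",
#               files={"query": payload})
```

Because the query carries the text itself, long documents can be split across several queries, matching the repartition mechanism mentioned above.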
The internet provides exciting opportunities for scholars of Japanese videogames. Fans from all over the world passionately collect, order, and share information about their hobby online. In this paper, we show how this potential can be exploited for research on Japanese videogames. We call this approach “duct-taping” databases: the integration of information from various heterogeneous, fragmentary online resources with the goal of creating a robust research dataset. We discuss the potentials and challenges of this approach and show how it allows us to better understand the historical development of Japan’s videogame production.
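The "duct-taping" idea, combining fragmentary records from heterogeneous fan databases into one dataset, can be sketched with a simple outer join. The sources, titles, and fields below are hypothetical toy data, not the actual databases used in the paper.

```python
import pandas as pd

# Hypothetical fragments from two fan-maintained databases,
# each covering an overlapping but incomplete set of games.
source_a = pd.DataFrame({
    "title": ["Space Invaders", "Xevious"],
    "year": [1978, 1983],
})
source_b = pd.DataFrame({
    "title": ["Xevious", "Pac-Man"],
    "developer": ["Namco", "Namco"],
})

# Outer join keeps records present in only one source; the indicator
# column records which source(s) each row came from.
merged = source_a.merge(source_b, on="title", how="outer", indicator=True)
```

In practice, matching titles across databases (romanisation variants, regional release names) is the hard part of this approach; the join itself is the easy step.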