A bidirectional transformation consists of a pair of transformations: a forward transformation produces a target view from a source, while a backward transformation puts modifications on the view back into the source, and the pair satisfies sensible round-trip properties. Bidirectional transformations originated from the view-update problem in the database community and are attracting growing interest among programming-language researchers as a novel programming model for data synchronization, and among software-engineering researchers as a new approach to evolutionary software development. In this article, we outline the history, basic principles, tools, and applications of bidirectional transformations from the perspectives of programming languages, software engineering, and databases.
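As a minimal sketch of the forward/backward pair and its round-trip properties (an illustrative asymmetric lens over a toy record, not taken from any particular bidirectional-transformation tool):

```python
# A minimal asymmetric lens: the source is a record, the view is one field.
# The example data and field names here are hypothetical.

def get(source):
    """Forward transformation: extract the view from the source."""
    return source["name"]

def put(source, view):
    """Backward transformation: put a modified view back into the source,
    leaving the parts of the source not visible in the view unchanged."""
    updated = dict(source)
    updated["name"] = view
    return updated

# Round-trip ("well-behavedness") properties:
src = {"name": "Ada", "salary": 100}
assert put(src, get(src)) == src          # GetPut: putting back an unchanged view is a no-op
assert get(put(src, "Grace")) == "Grace"  # PutGet: get retrieves exactly the view that was put
```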
In guitar performance, fingering is an important and complicated factor. In particular, left-hand fingering comprises various relationships between finger and string, such as a finger touching, pressing, or releasing the strings. Recognizing and distinguishing the precise fingering of the left hand can be applied to a self-learning support system that detects strings muted by a finger, and to automatic music transcription that includes the details of fingering techniques. The goal of our study is therefore the design and implementation of a system for recognizing string touches on the guitar. We propose a recognition method based on the conductive characteristics of strings and frets, develop a prototype system, and evaluate its effectiveness. Furthermore, we propose an application that utilizes our system.
An approach using the joint goal model has been proposed for software reuse. Common goals are essential for building the joint goal model; however, related work does not describe techniques for identifying them. In this paper, we propose a technique for identifying common goals, based on the similarity of goals and three proposed rules. An experiment using goal models from the domains of television, social network systems, and employment support systems shows the accuracy of the proposed rules, and an experiment using goal models from the camera domain shows the accuracy of the proposed technique.
Formal verification methods have often focused on qualitative properties such as constraints on the order of event occurrences. In practice, however, it is also necessary to verify quantitative properties about the performance of systems, to analyze performance quantitatively, and to optimize it. In this article, we develop LTLmp, an extension of LTL with mean-payoff formulae for describing quantitative properties about the long-run average cost and frequency of event occurrences. In addition, we develop effective algorithms for the LTLmp model-checking, satisfiability-checking, and optimization problems.
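As an illustration only (the notation below is assumed for exposition and is not necessarily the concrete syntax of LTLmp), a mean-payoff constraint can bound the long-run frequency of an event, e.g. requiring that an acknowledgment occurs in at least 90% of the steps in the long run:

```latex
\mathrm{MP}(\mathit{ack}) \geq 0.9,
\qquad\text{where}\qquad
\mathrm{MP}(\mathit{ack}) \;=\; \liminf_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1}
  \llbracket \mathit{ack}\ \text{holds at step}\ i \rrbracket .
```

Such formulae express quantitative long-run average properties that ordinary qualitative LTL operators (e.g. "always" and "eventually") cannot.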
To date, various techniques for predicting fault-prone modules have been proposed; however, test strategies, which assign a certain amount of test effort to each module, have rarely been studied. This paper proposes a simulation model of software testing that can evaluate various test strategies. The simulation model estimates the number of discoverable faults with respect to the given test resources, the test strategy, the complexity metrics of the set of modules to be tested, and the fault-prediction results. Based on a simulation case study applying fault prediction to two open source projects (Eclipse and Mylyn), we show the relationship between the available test effort and the effective test strategy.
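A deliberately simplified sketch of this kind of simulation (the detection curve, module data, and field names below are hypothetical and are not the paper's actual model): a strategy splits a fixed test-effort budget across modules, and discovered faults follow a saturating detection curve.

```python
import math

def simulate(modules, allocation, total_effort):
    """Return the estimated number of faults discovered when total_effort
    is split across modules in proportion to the allocation weights."""
    total_w = sum(allocation)
    discovered = 0.0
    for m, w in zip(modules, allocation):
        effort = total_effort * w / total_w
        # Detection probability saturates as effort grows, and rises more
        # slowly for more complex modules.
        p = 1.0 - math.exp(-effort / m["complexity"])
        discovered += m["faults"] * p
    return discovered

modules = [
    {"faults": 10, "complexity": 5.0, "predicted_risk": 0.9},
    {"faults": 2,  "complexity": 5.0, "predicted_risk": 0.2},
]

uniform    = [1.0, 1.0]                              # spread effort evenly
prediction = [m["predicted_risk"] for m in modules]  # weight by predicted risk

# Under a tight budget, concentrating effort on the predicted fault-prone
# module discovers more faults than spreading effort uniformly.
assert simulate(modules, prediction, 10.0) > simulate(modules, uniform, 10.0)
```

Comparing strategies under different budgets in this way is what lets a simulation model relate available test effort to the effective test strategy.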
We focus on effort estimation based on the effort of early-phase activities. We built effort estimation models using early-phase effort as an explanatory variable and compared their estimation accuracy with that of models based on software size. In addition, we built estimation models using both early-phase effort and software size. In our experiment, we used the ISBSG dataset, which was collected from software development companies, and regarded planning-phase effort and requirement-analysis effort as early-phase effort. The results showed that estimation accuracy improved the most when both software size and the sum of planning and requirement-analysis effort were used as explanatory variables (the average Balanced Relative Error improved from 148.4% to 75.4%).
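For reference, the accuracy metric can be computed as follows, assuming the commonly used definition of Balanced Relative Error (the example figures are hypothetical, not from the ISBSG data):

```python
def balanced_relative_error(actual, estimated):
    """Balanced Relative Error: |actual - estimated| / min(actual, estimated).
    Unlike plain relative error, it penalizes under-estimates and
    over-estimates symmetrically."""
    return abs(actual - estimated) / min(actual, estimated)

# A 1000 person-hour project, estimated at half or at double the true effort,
# scores the same BRE either way:
assert balanced_relative_error(1000, 500) == 1.0   # under-estimate by half
assert balanced_relative_error(1000, 2000) == 1.0  # over-estimate by double
```

Averaging this value over all projects in the dataset yields the reported average BRE figures.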
Commercial software products now often incorporate OSS. Industrial developers often need to know the plans for enhancements and bug fixes of a specific OSS feature when determining whether or not to incorporate it. However, because of the voluntary nature of contributions, it is difficult for outsiders to find a person familiar with a specific feature of an OSS project. In this paper, we present a tool that visualizes version archives to help industrial developers find OSS developers familiar with a specific feature. The tool applies topic analysis to version archives to characterize the activities of individual developers.
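As a much-simplified stand-in for topic analysis (the tool itself applies proper topic modeling to version archives; the commit messages and author names below are hypothetical), one can characterize each developer by the most frequent terms in their commit messages:

```python
from collections import Counter

def developer_profiles(commits, top_n=2):
    """commits: list of (author, message) pairs.
    Returns {author: list of that author's most frequent terms}."""
    counts = {}
    for author, message in commits:
        counts.setdefault(author, Counter()).update(message.lower().split())
    return {a: [term for term, _ in c.most_common(top_n)]
            for a, c in counts.items()}

commits = [
    ("alice", "fix rendering glitch in canvas renderer"),
    ("alice", "refactor canvas rendering pipeline"),
    ("bob",   "update network retry logic"),
]
profiles = developer_profiles(commits)
assert "canvas" in profiles["alice"]  # alice's activity centers on rendering
```

Topic modeling generalizes this idea: instead of raw term counts, it groups co-occurring terms into latent topics and associates each developer with the topics of the changes they contributed.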
In this paper, we present a survey of natural language corpora, with a particular focus on large-scale corpora and those applicable to sentiment analysis. Natural language corpora are crucial for training various natural language processing applications, from part-of-speech taggers and dependency parsers to dialog systems and sentiment analysis software. We compare several natural language corpora created for different languages, analyze their distinctive features, and examine the amount of additional annotation provided by the developers of those corpora.