Software must be continually evolved to keep up with users' needs. In this article, we propose a new taxonomy of software evolution consisting of three perspectives: methods, targets, and objectives of evolution. We also present a literature review on software evolution based on this taxonomy. The results provide a concrete baseline for discussing research trends and directions in the field of software evolution.
This tutorial, part of the tutorial series on empirical software engineering, introduces recent research on software metrics and their applications in practice. In particular, it focuses on the GQM (Goal/Question/Metric) paradigm, a framework for making metrics responsive to the goals of an evaluation and the contexts of the evaluation targets, and on several case studies that we have performed in the past.
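The GQM idea of deriving metrics top-down from goals can be sketched as a tiny data structure. The goal, questions, and metric names below are invented examples for illustration, not taken from the tutorial:

```python
# A tiny GQM tree: a Goal is refined into Questions,
# and each Question is answered by concrete Metrics.
gqm = {
    "goal": "Improve review effectiveness (viewpoint: QA manager)",
    "questions": [
        {"q": "How many defects escape review?",
         "metrics": ["post-release defect count", "review defect count"]},
        {"q": "How much effort does review consume?",
         "metrics": ["review hours per kLOC"]},
    ],
}

def metrics_for(tree):
    """Collect every metric the goal ultimately requires."""
    return [m for q in tree["questions"] for m in q["metrics"]]
```

Walking the tree bottom-up then answers each question from its metrics and, in turn, evaluates the goal.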
This tutorial introduces the Nash equilibrium, the most important equilibrium concept in noncooperative games. In 2-player zero-sum normal-form games, Nash equilibria can be computed in time polynomial in the number of pure strategies available to the players. However, in complicated games in which players take turns choosing actions, the number of pure strategies becomes enormous. We explain algorithms that can compute Nash equilibria in complicated card games such as poker. On the other hand, it is not known whether Nash equilibria in 2-player general-sum games can be found in polynomial time. The concepts associated with decision problems are not appropriate for discussing the computational complexity of finding Nash equilibria, since their existence has already been proven. We therefore introduce the classes PPAD and PPAD-complete to characterize the complexity of computing Nash equilibria.
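For the 2-player zero-sum case mentioned above, a fully mixed equilibrium of a 2x2 game even has a closed form obtained by making the opponent indifferent between their pure strategies. This is a minimal illustration of why the zero-sum case is easy; larger games require linear programming:

```python
def nash_2x2_zero_sum(a, b, c, d):
    """Mixed-strategy Nash equilibrium of a 2x2 zero-sum game
    with no saddle point. (a, b, c, d) are the row player's
    payoffs for (r1,c1), (r1,c2), (r2,c1), (r2,c2)."""
    denom = (a - b) - (c - d)
    p = (d - c) / denom          # P(row player plays row 1)
    q = (d - b) / denom          # P(column player plays col 1)
    v = (a * d - b * c) / denom  # value of the game to the row player
    return p, q, v

# Matching pennies, payoff matrix [[1, -1], [-1, 1]]:
p, q, v = nash_2x2_zero_sum(1, -1, -1, 1)  # both mix 50/50, value 0
```

Each probability is chosen so the opponent's two pure strategies yield equal payoff, which is exactly the indifference condition of a mixed equilibrium.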
Self-adaptive systems, which change their functional behavior at runtime, provide a desired level of flexibility. Although various runtime frameworks have been studied, they tend to rely on a particular architecture, which makes them inadequate for studying the intrinsic nature of self-adaptive systems. This paper presents an abstract, declarative framework for self-adaptive systems and relates it to an adaptive PHP-based Web application architecture that takes a model-based adaptation approach. Furthermore, the paper discusses the proposed approach with a simple e-commerce site application as an example, and presents the results of experiments conducted with the application.
Points-to analysis is an important technique for optimization and static program analysis, but its high computational complexity makes its execution time prohibitive for real-world software. The high parallel performance of GPUs is expected to accelerate points-to analysis, but how to achieve this is still unclear: proper use of a GPU often requires trial and error, since GPU hardware and its programming environments impose various restrictions, and the necessary technical knowledge and skills are still scarce. To address this problem, we present a fast GPU implementation method for the inclusion-based points-to analysis proposed by Andersen. The method includes various improvements to the sparse-matrix representations and matrix operations used by the analysis. We also provide a preliminary quantitative evaluation comparing its execution time with a CPU implementation on real-world medium-scale programs such as GDB (49 kLOC). The results show that our GPU implementation is faster than the CPU implementation on all benchmark programs, and up to 14.4 times faster in the best case.
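The core of Andersen-style inclusion-based analysis is propagating points-to sets along inclusion edges until a fixed point is reached. Below is a minimal CPU-side sketch handling only address-of and copy constraints, with invented variable names; the paper's GPU sparse-matrix formulation (and load/store constraints) is not reproduced here:

```python
from collections import defaultdict

def andersen(addr_of, copies):
    """Minimal inclusion-based (Andersen-style) points-to analysis.
    addr_of: list of (p, a) for `p = &a`  (a is in pts(p))
    copies:  list of (p, q) for `p = q`   (pts(q) is a subset of pts(p))"""
    pts = defaultdict(set)
    succ = defaultdict(set)              # inclusion edges q -> p
    for p, a in addr_of:
        pts[p].add(a)
    for p, q in copies:
        succ[q].add(p)
    work = list(pts)
    while work:                          # worklist fixed-point iteration
        q = work.pop()
        for p in succ[q]:
            before = len(pts[p])
            pts[p] |= pts[q]             # propagate along the edge
            if len(pts[p]) != before:
                work.append(p)           # p changed: revisit its successors
    return dict(pts)

# p = &a; q = &b; r = p; r = q   =>   pts(r) = {a, b}
result = andersen([("p", "a"), ("q", "b")], [("r", "p"), ("r", "q")])
```

On a GPU, the `succ` relation and the points-to sets become sparse boolean matrices, and the propagation step becomes a sparse matrix operation, which is where the parallelism comes from.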
In software development, requirements that satisfy customers' needs must be elicited without implementing needless functions. However, a requirement that satisfies customers' needs does not always coincide with one that contributes strongly to the goal of the development. Accordingly, conflicts between contribution and validity with respect to customers' needs should be detected during requirements elicitation, before the requirements specification is written. For this problem, we previously proposed a metric Dip(g) that detects such conflicts based on a goal-oriented requirements analysis method, one of the requirements elicitation methods. However, because no application assessment of the method had been reported, its application problems also remained unclear. In this paper, we report an application problem discovered by applying the method to an existing goal graph. In addition, we propose a new method that resolves the discovered problem by reducing the goal graph based on logical formulas.
The authors have previously developed a modeling language for information control systems. This research applies the modeling language to train operation control. We briefly explain the modeling language, model a train operation control system, and verify the model using a Promela description and LTL formulas. In fact, this approach uncovered several counterexamples in the model.
COP (Context-Oriented Programming) languages such as ContextJ* enable programmers to describe context-aware behavior elegantly: the primary system behavior can be separated from the context-aware behavior. Unfortunately, however, such programs become difficult to debug because of the complexity of COP execution and the dependencies between objects and contexts. To deal with this problem, this paper proposes CJAdviser, an SMT-based debugging support tool for ContextJ*. In CJAdviser, the execution trace of a ContextJ* program is converted into a context dependence graph that can be analyzed by the SMT solver Yices. Using CJAdviser, we can check a variety of object-context dependencies.
Code reviews are useful quality assurance activities. In practice, however, thorough reviews can be hard to perform because of a limited budget and/or a short time to delivery. This paper focuses on cost-effective review planning, that is, how to make the most effective selection of modules to be preferentially reviewed within a given budget and/or time to delivery. The paper formulates this module selection as a 0-1 programming problem that considers the modules' fault-proneness, review costs, and couplings with other modules. The usefulness of the proposed method is discussed through simulations with six open source software systems.
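The module-selection idea can be illustrated with a brute-force stand-in for the 0-1 program. The module data below are hypothetical, and the paper's actual formulation also models inter-module couplings, which this sketch omits:

```python
from itertools import combinations

def select_modules(modules, budget):
    """Choose the subset of modules to review that maximizes total
    fault-proneness within the review budget (brute force over the
    0/1 choices; a solver would handle realistic problem sizes).
    modules: list of (name, fault_proneness, review_cost)."""
    best, best_value = (), 0.0
    for r in range(len(modules) + 1):
        for subset in combinations(modules, r):
            cost = sum(c for _, _, c in subset)
            value = sum(f for _, f, _ in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return [name for name, _, _ in best], best_value

# Hypothetical modules (name, fault-proneness, review cost), budget 6:
names, value = select_modules(
    [("A", 0.9, 5), ("B", 0.6, 3), ("C", 0.4, 2), ("D", 0.2, 1)], 6)
```

Here reviewing B, C, and D (total cost 6, total fault-proneness 1.2) beats spending almost the whole budget on the single most fault-prone module A.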
To apply model checking to the verification of an RTOS, we need to close the behavior of the RTOS by constructing an environment that calls its functions. Constructing a nondeterministic environment is the traditional approach, but it is difficult to describe the detailed behavior of the environment precisely in this way. For the environment of an RTOS, we must also consider the problem of structural variation: the environment consists of varying numbers of tasks, resources, and priorities, which makes it practically impossible to construct all such variations exhaustively by hand. In this paper, we present a UML-based method to model the environment. Our model (called an environment model) allows us to describe the structural variation and the behavior of the environment generally and easily using a class diagram and statechart diagrams. Our tool, the environment generator, can generate all the variations of the environment within the bounds of the environment model. As a practical example, we applied our method to the verification of an OSEK/VDX OS design model and checked the correctness of its scheduling functions.
Lock operations and memory barrier instructions consume many CPU cycles. For this reason, many concurrent GC (garbage collection) implementations employ incremental update GC, which can easily be implemented without locks or memory barriers. However, typical incremental update GC implementations stop the mutators at the end of the mark phase to mark objects that should have been marked, and are thus unsuitable for real-time applications. In this paper, we propose a concurrent GC based on snapshot GC, which does not need to stop the mutators in this way. Our GC reduces the number of locks and memory barriers by having each mutator store the pointers that have been overwritten and pass them to the collector en masse. We implemented this GC in the Dalvik VM and evaluated it.
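The batching idea can be sketched as a snapshot-style write barrier that logs overwritten pointers thread-locally and hands them to the collector in batches. This is a simplified illustration of the mechanism, not the Dalvik VM implementation:

```python
class SnapshotMutator:
    """Sketch of a snapshot-at-the-beginning write barrier: overwritten
    pointers are logged locally and passed to the collector en masse,
    so no lock or memory barrier is needed on each pointer write."""

    def __init__(self, batch_size=2):
        self.log = []                 # thread-local overwritten pointers
        self.batch_size = batch_size
        self.collector_queue = []     # handed to the collector in batches

    def write_field(self, obj, field, new_ref):
        old_ref = obj.get(field)
        if old_ref is not None:
            self.log.append(old_ref)  # preserve snapshot reachability
        if len(self.log) >= self.batch_size:
            # One hand-off per batch instead of one synchronization per write.
            self.collector_queue.extend(self.log)
            self.log.clear()
        obj[field] = new_ref

m = SnapshotMutator()
a = {}
m.write_field(a, "f", "obj1")
m.write_field(a, "f", "obj2")   # overwrites obj1 -> logged locally
m.write_field(a, "f", "obj3")   # overwrites obj2 -> batch of 2 flushed
```

Because the collector sees every pointer that existed at the snapshot, marking can finish without a final stop-the-world re-marking phase.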
This paper aims to clarify factors related to software maintenance efficiency in order to reduce the maintenance costs borne by user companies. We defined maintenance amount per engineer (the number of maintained programs / the number of engineers) as maintenance efficiency, and analyzed a cross-company software maintenance dataset collected by the Economic Research Association. A multiple regression analysis showed that the degree of maintenance process standardization is a factor that can be controlled by the user company and is related to maintenance efficiency. Maintenance process standardization is expected to improve maintenance efficiency by a factor of about 8 (at least about 2, and at most about 35).
Structured overlay networks, known for their application to distributed hash tables, provide node lookup without any centralized index in large-scale networks. However, they have difficulty handling complex queries such as partial-match queries. Past studies have tried to realize partial-match retrieval on structured overlay networks, for example by dividing keys into n-grams, but those approaches have serious problems, e.g., they are affected by the skewed frequency distribution of n-grams. In this paper, we propose a new partial-match retrieval method that combines Skip Graph, a structured overlay network supporting range queries, with the suffix array, a data structure useful in string manipulation. We report the results of simulation experiments and show that the proposed method can handle partial-match queries within O(log N) overlay routing hops in a network of N nodes. We also show that the proposed method is free from various problems, including load imbalance among nodes.
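The suffix-array half of the idea can be illustrated in isolation: sort all suffixes once, then answer a partial-match query with two binary searches. This is a single-machine sketch; distributing the sorted entries over Skip Graph nodes so that the search becomes O(log N) routing hops is the paper's contribution and is not shown:

```python
from bisect import bisect_left, bisect_right

def suffix_array(text):
    """All suffixes of `text`, sorted; each entry is (suffix, start index)."""
    return sorted((text[i:], i) for i in range(len(text)))

def partial_match(sa, pattern):
    """Start positions where `pattern` occurs as a substring, found via
    two binary searches over the sorted suffixes (O(log n) probes)."""
    lo = bisect_left(sa, (pattern,))
    hi = bisect_right(sa, (pattern + "\uffff",))
    return sorted(i for _, i in sa[lo:hi])

sa = suffix_array("banana")
positions = partial_match(sa, "ana")   # "ana" occurs at indices 1 and 3
```

Because every occurrence of a pattern is a prefix of some suffix, all matches form one contiguous run in the sorted order, which is exactly what a range query over an ordered overlay like Skip Graph can retrieve.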
Finding code clones in open source systems is important for efficient and safe reuse of existing open source software. In this paper, we propose a novel search model, open code clone search, to explore code clones in open source repositories on the Internet. Based on this search model, we have designed and implemented a prototype system named OpenCCFinder. The system takes a query code fragment as input and returns code fragments that contain clones of the query, using publicly available code search engines as external resources. Using OpenCCFinder, we have conducted several case studies on Java code. These case studies show the applicability of our system.
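Clone matching between a query fragment and fragments returned by a code search engine can be approximated, for illustration, by comparing identifier-normalized token n-grams. This is a generic sketch of token-based clone detection, not OpenCCFinder's actual matching algorithm:

```python
import re

def fingerprints(code, n=4):
    """Token 4-grams of identifier-normalized code: every identifier is
    replaced by the placeholder `id`, so renamed clones still match."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
    tokens = ["id" if re.match(r"[A-Za-z_]", t) else t for t in tokens]
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def clone_similarity(query, candidate):
    """Jaccard similarity of the two fragments' fingerprint sets."""
    a, b = fingerprints(query), fingerprints(candidate)
    return len(a & b) / len(a | b) if a | b else 0.0

# Same structure, different identifiers -> detected as a clone pair:
s = clone_similarity("int a = b + c;", "int x = y + z;")
```

A threshold on this similarity then decides whether a returned fragment counts as a clone of the query.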
Debugging failing test cases, particularly the search for failure causes, is often a laborious and time-consuming activity. With the help of spectrum-based fault localization, developers are able to reduce the potentially large search space by detecting anomalies in tested program entities. However, such anomalies do not necessarily indicate defects, and so developers still have to analyze numerous candidates one by one until they find the failure cause. This procedure is inefficient since it does not take into account how suspicious entities relate to each other, whether another developer is better qualified for debugging this failure, or how erroneous behavior comes to be. We present test-driven fault navigation as an interconnected debugging guide that integrates spectrum-based anomalies and failure causes. By analyzing failure-reproducing test cases, we reveal suspicious system parts, developers most qualified for addressing localized faults, and erroneous behavior in the execution history. The Path tool suite realizes our approach: PathMap supports a breadth-first search for narrowing down failure causes and recommends developers for help; PathFinder is a lightweight back-in-time debugger that classifies failing test behavior for easily following infection chains back to defects. The evaluation of our approach illustrates the improvements for debugging test cases, the high accuracy of recommended developers, and the fast response times of our corresponding tool suite.
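Spectrum-based anomaly scores of the kind this approach builds on can be computed with standard formulas such as Ochiai. The following is a generic illustration with invented entity names; the Path tools' exact scoring is not specified here:

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness of one program entity.
    failed_cov / passed_cov: number of failing / passing tests that
    executed the entity; total_failed: failing tests overall."""
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# An entity covered by every failing test and no passing test
# is maximally suspicious:
scores = {
    "lineA": ochiai(failed_cov=2, passed_cov=0, total_failed=2),
    "lineB": ochiai(failed_cov=1, passed_cov=3, total_failed=2),
}
```

Ranking entities by such a score is what produces the candidate list that test-driven fault navigation then enriches with developer recommendations and execution history.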
In object-oriented programs, access modifiers are used to control the accessibility of fields and methods from other objects. Choosing appropriate access modifiers is one of the keys to easily maintainable programs. In this paper, we propose a novel analysis, Accessibility Excessiveness (AE), for each field and method in a Java program, which captures the discrepancy between the declared access modifier and its actual usage. We have developed an AE analyzer, ModiChecker, which analyzes each field and method of the input Java programs and reports their excessiveness. We have applied ModiChecker to various Java programs, including several OSS projects, and found the tool very useful for detecting fields and methods with excessive access modifiers.
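The AE idea of comparing a declared modifier against the narrowest modifier that its observed accesses require can be sketched as follows. This is a simplified model of Java's access levels with invented access categories, not ModiChecker's implementation:

```python
# Narrowest modifier each kind of observed access requires
# (simplified: Java's protected also permits same-package access).
REQUIRED = {"same_class": "private", "same_package": "package",
            "subclass": "protected", "other": "public"}
ORDER = ["private", "package", "protected", "public"]  # narrowest first

def excessiveness(declared, accesses):
    """Report whether `declared` is wider than the narrowest modifier
    that still permits every observed access of the field or method.
    accesses: list of "same_class"/"same_package"/"subclass"/"other"."""
    needed = max((REQUIRED[a] for a in accesses),
                 key=ORDER.index, default="private")
    return {"needed": needed,
            "excessive": ORDER.index(declared) > ORDER.index(needed)}

# A public method only ever used from its own class and package:
report = excessiveness("public", ["same_class", "same_package"])
```

A `public` member accessed only within its package is flagged: narrowing it to package-private would not break any existing caller.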
This paper proposes a monitoring method for information retrieval from sources in large-scale networks, which tries to achieve the maximum user utility at the minimum source observation cost. Generally, information accumulated in a network is updated constantly, and content that users have downloaded becomes obsolete as time passes. User utility, which users obtain when they successfully get their target information, declines with the time elapsed since a retrieval request was placed or the content was renewed. Accordingly, the proposed monitoring method adjusts its observation intervals according to the information sources' update intervals, taking into account the decrease in user utility and the monitoring cost incurred by observing the sources. The usefulness of the proposal is confirmed by computer simulations, which show that it is especially effective when the expense of observation is neither heavily emphasized nor neglected. In addition, we developed a prototype monitoring system implementing the proposal and conducted monitoring experiments in a real Internet environment. The results show that the proposed method is applicable when conditions are close to those of the simulations.
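One simple way to formalize the utility-versus-cost tradeoff (our own illustration, not the paper's model) is to charge obs_cost/T per unit time for observing every T time units, while the average staleness T/2 loses utility at a constant rate; the total cost is then minimized at T* = sqrt(2 * obs_cost / decay_rate):

```python
import math

def optimal_interval(obs_cost, decay_rate):
    """Observation interval minimizing per-unit-time cost
        obs_cost / T  +  decay_rate * T / 2
    (observation expense vs. average staleness loss). Setting the
    derivative to zero gives T* = sqrt(2 * obs_cost / decay_rate).
    This is a toy model, not the paper's formulation."""
    return math.sqrt(2 * obs_cost / decay_rate)

# Cheap observations or fast-decaying utility -> observe more often:
T = optimal_interval(obs_cost=2.0, decay_rate=1.0)
```

The qualitative behavior matches the abstract's conclusion: when observation cost dominates or is negligible, the optimum degenerates, and the interval-adjustment pays off most in between.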