Recently, even embedded systems have come to be developed in Java, owing to its reliability and its ability to shorten development time. In this area, since many applications require quick responses to events, Java must be equipped with a real-time GC, one that does not suspend the application for long periods. In this paper, we present a full picture of real-time GC, using Metronome, a well-known real-time collector, as an example. In addition, we introduce the details of real-time GC techniques on several topics.
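The pause-bounding idea behind a collector like Metronome can be sketched in a few lines: marking proceeds in small increments, each limited by a time budget, with the application running between increments. The following Python sketch is an illustrative simplification (the heap layout, the 0.5 ms budget, and the omission of a write barrier are all assumptions for exposition), not Metronome's actual algorithm:

```python
import time

# Toy heap (hypothetical): each object is an id mapping to a list of children.
heap = {i: [] for i in range(10000)}
for i in range(0, 9998):
    heap[i] = [i + 1, i + 2]          # simple overlapping reference chains
roots = [0]

def incremental_mark(budget_ms=0.5):
    """Mark reachable objects in bounded increments: each quantum does at
    most `budget_ms` of work, then yields to the application (the mutator),
    which bounds individual pause times."""
    marked = set(roots)
    gray = list(roots)                 # tricolor: gray = marked, children unscanned
    pauses = []
    while gray:
        start = time.perf_counter()
        # One collector quantum: scan gray objects until the budget is spent.
        while gray and (time.perf_counter() - start) * 1000 < budget_ms:
            obj = gray.pop()
            for child in heap[obj]:
                if child not in marked:
                    marked.add(child)
                    gray.append(child)
        pauses.append((time.perf_counter() - start) * 1000)
        # ... the mutator would run here; a real collector also needs a
        # write barrier so references created meanwhile are not missed.
    return marked, pauses

marked, pauses = incremental_mark()
print(len(marked), "objects reachable in", len(pauses), "increments")
```

Each recorded pause stays near the budget regardless of heap size, which is the essential real-time property.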
This article describes model checking techniques that use satisfiability solving. Model checking is an automatic verification method that determines whether a given property holds by exploring the state space of a system. Satisfiability-solving-based model checking can benefit from the recent rapid performance improvements of SAT and SMT solvers. In this article we first describe the basics of model checking and of satisfiability solving. Then we explain how a program's source code can be model checked using satisfiability solving. Next we focus on a more general class of systems and describe bounded model checking for them. This technique is bounded in the sense that the state search is limited to the space reachable within a fixed number of transitions. We also show how this limitation can be removed to allow verifying the whole behavior of the system.
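The bounded search can be made concrete with a toy example. Below, a hypothetical mod-8 counter system is unrolled to depth k, and all paths of length at most k are searched for a property violation; brute-force enumeration stands in for the SAT solver that a real bounded model checker would run on the propositional unrolling:

```python
from itertools import product

# Toy transition system (hypothetical): a counter modulo 8 that starts
# at 0 or 1 and may either add 2 or stay put at each step.
STATES = range(8)
def initial(s): return s in (0, 1)
def step(s, t): return t == s or t == (s + 2) % 8
def bad(s): return s == 6          # property under check: "never reach 6"

def bmc(max_k):
    """Bounded model checking by exhaustive path enumeration.  A real
    implementation encodes initial/step/bad as propositional formulas
    over k+1 copies of the state variables and hands the conjunction
    to a SAT solver; here brute force plays that role."""
    for k in range(max_k + 1):
        # Search for s_0 .. s_k with initial(s_0), step edges, and
        # bad(s_k) -- exactly the standard BMC unrolling at depth k.
        for path in product(STATES, repeat=k + 1):
            if (initial(path[0])
                    and all(step(path[i], path[i + 1]) for i in range(k))
                    and bad(path[k])):
                return k, path      # counterexample of length k
    return None                     # property holds up to bound max_k

print(bmc(5))
```

The search is bounded exactly as the abstract describes: only states reachable within max_k transitions are examined, so a `None` result alone does not prove the property for unbounded executions.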
Dynamic analysis is a category of program analysis techniques that analyze execution traces of a program to understand its behavior and performance. This paper explains several applications of dynamic analysis, such as code coverage analysis and statistical debugging, and their effectiveness. The paper also presents several tools available for analyzing Java programs.
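As a concrete illustration of code coverage analysis, the following Python sketch records which lines of a toy function execute during one run. The function and setup are hypothetical, and real coverage tools (e.g. JaCoCo for Java) use far more efficient bytecode instrumentation:

```python
import sys

def classify(n):
    """Toy function under test (hypothetical example)."""
    if n < 0:
        return "negative"
    return "non-negative"

def run_with_coverage(fn, *args):
    """Record which source lines of `fn` execute -- the essence of
    line-coverage analysis over an execution trace."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return executed

# classify(5) never takes the negative branch, so the line holding
# `return "negative"` is reported as uncovered:
print(sorted(run_with_coverage(classify, 5)))
```

Comparing the covered sets of several runs is also the raw material for statistical debugging, which correlates executed predicates with failing runs.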
Software process modeling (i.e., describing explicit rules and constraints for the execution of various activities in software development) is very important as a basis for quantitative management, qualitative evaluation, and improvement of software processes. This paper overviews research activities in software process modeling and its application to process management and improvement, based on the latest research directions. We also briefly introduce standards and reference models related to process management and process improvement.
With the Web 2.0 movement, situational applications are now developed using the 'mashup' technique. This created the need for an open standard protocol that authorizes delegated access by a web service to protected resources on another web server. We explain the evolution process of the OAuth protocol specification. In its standardization, the viewpoint of security and that of ease of implementation are in conflict.
This paper provides a systematic review of studies on software fault prediction, with a specific focus on the types of metrics used. The research questions of this paper are: what types of metrics are used in studies related to faults, and is there a trend in the proposal of fault-related metrics? The review covers 63 papers from 2 journals and 5 conference proceedings published in 2000–2010. According to the review results, we found that code-related and process-related historical metrics have been used mainly since 2005, and organization-related and geographic metrics have been used since 2008.
In the information science area, research on supporting intellectual and creative activity is essential. In this paper we introduce research from cognitive science, social psychology, behavioral economics, and related fields. Some discoveries from this research may be useful for redesigning cognitive tools that support human creative activity. We introduce some examples of cognitive tools that are based on human perception and cognition. From the discussion of this research and these discoveries, we predict the future of cognitive tools and human-computer interaction.
Because software design is highly flexible, it is important to extract recurring problems and their solutions as design patterns and to reuse them in order to improve design efficiency and consistency. In this paper, we report the results of a survey of engineering research on design patterns in object-oriented software development. The surveyed research covers design pattern application, detection, and verification.
Most conventional implementations of regular expression matching are based on backtracking. Such implementations are slow in the worst case, and thus a better matching algorithm is desirable. However, it is nontrivial to provide an efficient matching algorithm that can deal with practical extensions, including submatch addressing. This paper studies regular expressions with lookaheads and negative lookaheads, abbreviated REwLA. First, we propose a transformation from a REwLA of size m to a deterministic finite automaton with O(2^{2m}) states. Next, we consider weighted regular expressions, which enable us to compute submatch addressing. We propose a transformation from a weighted REwLA of size m to a weighted nondeterministic finite automaton with O(2^{2m}) states.
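The kind of automaton construction involved can be illustrated, in simplified form, by the classic subset construction from an NFA to a DFA, which exhibits the exponential growth of the state set. This Python sketch uses a textbook NFA (the "third symbol from the end is 'a'" language, the standard witness that determinization can cost 2^m states); it is not the paper's REwLA transformation:

```python
from itertools import chain

# Tiny NFA (illustrative): 4 states, alphabet {a, b}.  State 0 loops on
# everything and guesses the position 3 from the end; state 3 accepts.
NFA = {
    "start": 0,
    "accept": {3},
    "delta": {
        (0, "a"): {0, 1}, (0, "b"): {0},
        (1, "a"): {2},    (1, "b"): {2},
        (2, "a"): {3},    (2, "b"): {3},
    },
}

def determinize(nfa):
    """Subset construction: each DFA state is a set of NFA states."""
    start = frozenset({nfa["start"]})
    dfa, todo = {}, [start]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for sym in "ab":
            T = frozenset(chain.from_iterable(
                nfa["delta"].get((q, sym), set()) for q in S))
            dfa[S][sym] = T
            todo.append(T)
    # A subset is accepting iff it contains an NFA accepting state.
    accepting = {S for S in dfa if S & nfa["accept"]}
    return dfa, accepting

dfa, accepting = determinize(NFA)
print(len(dfa), "reachable DFA states from 4 NFA states")
```

Here all 2^3 = 8 subsets containing state 0 become reachable, matching the known lower bound for this language family.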
What we call "generate-test-α" is a computation pattern in which some extra computation, such as choosing an optimal solution, is performed after the usual generate-and-test computation that enumerates all solutions passing a test. A naive parallel algorithm for generate-test-α can be given as a composition of parallel skeletons, but it suffers from a heavy computation cost when the number of generated candidates is large. Such a situation often occurs when we generate a set of substructures from a source data structure. It is known in the field of skeletal parallel programming that a certain class of simplified computations without test phases admits efficient linear-cost algorithms obtained by systematic transformations exploiting semirings. However, no transformation is known as yet that optimizes generate-test-α computations uniformly. In this paper, we propose a novel transformation that embeds the test phases into semirings, so that a generate-test-α computation can be transformed into a simplified generate-α computation. This transformation allows us to reuse efficient parallel algorithms for generate-α in generate-test-α computations. In addition, we give powerful optimizations for a class of generate-α computations, so that we can provide uniform optimizations for a wide class of generate-test-α computations.
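The effect of embedding the test into the computation can be illustrated with a small sequential analogue in Python. Here generate-test-α enumerates all contiguous segments (generate), keeps those of even length (a hypothetical test), and takes the maximum sum (α); the fused version folds the test into the running state, replacing the quadratic candidate set with one linear pass. This is only a sketch of the idea, not the paper's skeleton-based parallel transformation:

```python
def max_even_segment_sum_naive(xs):
    """Literal generate-test-alpha over O(n^2) candidate segments."""
    segments = [xs[i:j] for i in range(len(xs))
                        for j in range(i + 1, len(xs) + 1)]   # generate
    passed = [seg for seg in segments if len(seg) % 2 == 0]   # test
    sums = [sum(seg) for seg in passed]
    return max(sums) if sums else None                        # alpha

def max_even_segment_sum_fused(xs):
    """Fused generate-alpha: the test is embedded into the state.  We
    track the best sums of even- and odd-length segments ending at the
    current position, a sequential shadow of embedding the test into
    the (max, +) semiring computation."""
    NEG = float("-inf")
    even, odd, best = NEG, NEG, NEG
    for x in xs:
        # Extending an odd segment gives an even one, and vice versa;
        # max(even, 0) allows an odd segment to start fresh at x.
        even, odd = odd + x, max(even, 0) + x
        best = max(best, even)
    return best if best != NEG else None

xs = [3, -1, 4, -1, 5]
print(max_even_segment_sum_naive(xs), max_even_segment_sum_fused(xs))
```

Both versions agree, but the fused one runs in O(n) time instead of O(n^3), mirroring the linear-cost algorithms the transformation aims to reuse.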
A number of methods have been proposed to guarantee the polynomial-time computability of programs represented by term rewriting systems. Marion (2003) proposed the light multiset path ordering, which guarantees polynomial-size normal forms, and showed that in term rewriting systems that can be oriented by this ordering, any term can be evaluated in polynomial time. It is also shown that any polynomial-time computable function can be encoded by a term rewriting system that can be oriented by this ordering. In general, however, there are term rewriting systems whose normal forms can be evaluated in polynomial time but which cannot be oriented by this ordering. Thus a more general path ordering that guarantees polynomial-size normal forms is desirable. In this paper, we give an extension of the light multiset path ordering so that polynomial-size normal forms are guaranteed for a more general class of term rewriting systems.
This paper describes a lightweight first-class overloading scheme for ML-style functional programming languages. In this scheme, overloaded functions are first-class citizens and have polymorphic types with a type kind that denotes the set of instances of the overloaded function. The type system and the compilation algorithm of this scheme are designed as a small and natural extension to a polymorphic record calculus and its compilation, so it is easy to extend an existing practical programming language and its full-scale compiler with this scheme, provided the language includes polymorphic records. The scheme reported in this paper has been implemented in the SML# compiler, which is available as open-source software.
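The dictionary-passing flavor of such a compilation scheme can be sketched in Python: an overloaded function is an ordinary first-class value that selects one instance from its set of instances, modelled here by a runtime dictionary. All names are illustrative, and this is not the SML# implementation, where instance selection is resolved statically through the type system rather than by runtime class lookup:

```python
# Set of instances of the overloaded function `show` (hypothetical):
show_instances = {
    int:  lambda x: str(x),
    list: lambda xs: "[" + ", ".join(show(x) for x in xs) + "]",
}

def show(x):
    """The overloaded function: pick the instance for x's type.  In the
    typed setting this choice is determined by the type kind, not by a
    runtime test."""
    return show_instances[type(x)](x)

# Because `show` is an ordinary value, it is first-class: it can be
# passed to other functions like any other function value.
def show_all(show_fn, xs):
    return [show_fn(x) for x in xs]

print(show_all(show, [1, [2, 3]]))
```

The point of the paper's scheme is that this dispatch machinery is derived from a polymorphic record calculus, so an existing record-compilation pipeline can be reused almost unchanged.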
Infinitary term rewriting has been proposed to model functional programs that deal with virtually infinite data structures such as streams or lazy lists. Strong head normalization is a fundamental property of infinitary term rewriting systems, and methods for proving this property have been proposed by Zantema (2008) and Endrullis et al. (2009). Endrullis et al. (2010) proposed a class of infinitary term rewriting systems, stream term rewriting systems, and gave a decision procedure for the productivity of a class of stream term rewriting systems. In this paper, we present procedures for disproving these two properties of infinitary term rewriting systems: strong head normalization and productivity. The basic idea of our procedures is to construct rational counterexamples, which are infinitary terms that have finite representations. The correctness of our procedures is proved, and an implementation is reported. Our experiments reveal that our procedures successfully disprove strong head normalization and productivity automatically for some examples for which no automated disproving procedure was previously known.