We implemented a portable database garbage collection system (DBGC) running on Hibernate. Portability is achieved by using Hibernate as middleware, which absorbs the differences among database management systems. The object reachability needed for GC is obtained from the table-relation metadata that Hibernate holds. We adopt Yuasa's snapshot-at-the-beginning algorithm, and marking is executed as set operations in SQL. Because we use an integer value as the marking flag, the sweep phase can be executed at any time, independently of the usual mark-sweep sequence. Information about newly added objects and modified pointers is obtained by hooking into Hibernate's object-observation mechanism. By controlling the integer-typed marking flag, DBGC can run in parallel with database services.
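The marking-as-set-operations idea can be sketched with plain SQL. The schema below (an `objects` table plus a `refs` edge table) and the epoch-numbered integer flag are illustrative assumptions, not DBGC's actual Hibernate-derived layout:

```python
import sqlite3

# Minimal sketch of mark-sweep over tables with an integer marking flag.
# The objects/refs schema is hypothetical, standing in for the table
# relations that Hibernate's metadata would describe.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE objects (id INTEGER PRIMARY KEY, mark INTEGER DEFAULT 0);
CREATE TABLE refs (src INTEGER, dst INTEGER);
""")
con.executemany("INSERT INTO objects(id) VALUES (?)", [(i,) for i in range(1, 7)])
con.executemany("INSERT INTO refs VALUES (?, ?)", [(1, 2), (2, 3), (4, 5)])

EPOCH = 1  # rows marked in this GC cycle carry the current epoch number

def mark(roots):
    con.executemany("UPDATE objects SET mark=? WHERE id=?",
                    [(EPOCH, r) for r in roots])
    while True:
        # propagate the mark one step, as a single set operation in SQL
        cur = con.execute("""
            UPDATE objects SET mark=? WHERE mark<? AND id IN
              (SELECT dst FROM refs JOIN objects ON refs.src = objects.id
               WHERE objects.mark = ?)""", (EPOCH, EPOCH, EPOCH))
        if cur.rowcount == 0:  # fixpoint reached
            break

def sweep():
    # any row not carrying the current epoch is unreachable garbage;
    # because the flag is an integer, this can run at any time
    con.execute("DELETE FROM objects WHERE mark < ?", (EPOCH,))

mark([1])   # object 1 is the only root
sweep()     # objects 4, 5, 6 are unreachable and removed
alive = [r[0] for r in con.execute("SELECT id FROM objects ORDER BY id")]
print(alive)  # → [1, 2, 3]
```

Bumping `EPOCH` each cycle is what lets sweeping lag behind marking: rows from older cycles remain distinguishable without a separate "unmark" pass.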
Analyzing a trace log is an effective way to debug software running on multiprocessors, but it is inefficient for developers to analyze the trace log directly. Several visualization tools for trace logs have therefore been developed, but they suffer from two problems: a lack of generality and a lack of extensibility. We developed TraceLogVisualizer (TLV), a trace-log visualization tool that solves both problems. In this paper, we discuss the requirements for achieving generality and extensibility, and how we met them.
This paper presents an open-source multiprocessor simulator for embedded systems. The purpose of the simulator is to support efficient development of a real-time OS and applications for multiprocessors. To realize the multiprocessor simulation environment, an ISS (Instruction Set Simulator) for a single processor is extended with few modifications so that it cooperates with other ISSes. Four mechanisms connecting the ISSes were newly developed: (1) a shared-memory mechanism, (2) an exclusion-control mechanism, (3) an interrupt mechanism, and (4) a synchronization mechanism. Moreover, we propose a method that reduces simulation time by exploiting the idle time of the real-time OS. The environment can be adapted to other kinds of ISSes.
Since the scale of experiment targets and testbeds used for network simulation and emulation is increasing, tools for network experiments are becoming more important for reducing human effort and boosting automation. One such tool is a programming language, and it is the main focus of this study. Traditionally, many small scripts in different languages have been used separately during an experiment. This makes management harder, because users must maintain several scripts in different languages and may lose track of which jobs are done. To provide a unified and effective framework, our approach builds one script for the whole experiment, in one language, from top to bottom. The language we developed has network-oriented features that free users from routine experimental work such as network construction and power-status control. Moreover, it has an inter-node communication feature that supports passing messages and arguments. In conclusion, the language and its processing system reduce conducting an experiment to a much simpler task: running a script.
Distributed hash tables (DHTs), which provide scalable key-value store and lookup services, are a common way to construct peer-to-peer (P2P) networks. However, existing DHT algorithms do not consider the problems caused by NATs. NATs are a serious hindrance to P2P networks and must be taken into account when designing and implementing P2P applications, yet typical algorithms and implementations do not handle them sufficiently. We therefore design a DHT that works well in environments involving NATs and implement a reference library to demonstrate our proposal. The source code is distributed on the Internet under the BSD license, so anyone can use and modify it freely.
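For readers unfamiliar with DHTs, the key-value routing they provide can be sketched as a Chord-style successor lookup on a hashed identifier ring. This is only the generic textbook scheme, not the NAT-aware design the paper proposes:

```python
import hashlib

# Generic consistent-hashing lookup: each node and each key is hashed
# onto a ring, and a key is owned by the first node clockwise from it.
M = 2 ** 16  # deliberately tiny identifier space for the sketch

def ident(name: str) -> int:
    """Map a node name or key string to a point on the ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(key: int, ring: list) -> int:
    """Return the first node identifier clockwise from `key`."""
    nodes = sorted(ring)
    for n in nodes:
        if n >= key:
            return n
    return nodes[0]  # wrap around past the top of the ring

ring = [ident("node%d" % i) for i in range(8)]
owner = successor(ident("some-key"), ring)
assert owner in ring  # every key is owned by exactly one live node
```

A NAT-aware DHT must additionally ensure that `owner` is actually reachable from the querying node, which is exactly the property the generic scheme takes for granted.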
In the last few years, Rich Internet Applications (RIAs), which bring the user experience of desktop applications while keeping the advantages of current web applications, have become popular, supported by high-speed networks and diverse connection methods. Unlike traditional web applications, RIAs mainly run on the client side and invoke server-side services as needed; main functionalities such as validation, screen transition, and business logic are processed on the client. This paper describes the design and implementation of a framework that helps RIA developers divide client-side processing into functional units and develop each function independently. The framework divides client-side functionality into three units (validation of input values, screen-dependent processes, and shared processes) and makes them work as a single application by using an event model and naming conventions instead of explicit binding code. The framework thus provides a coherent development style for teams of RIA developers.
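The binding-by-convention idea is language-independent and can be sketched in a few lines. The handler names (`on_<event>`, `validate_<field>`) and the dispatcher below are invented for illustration; the abstract does not specify the framework's actual event model or naming conventions:

```python
# Sketch: handlers are wired up purely by their names, so no explicit
# binding code connects the validation unit to the screen unit.
class LoginScreen:
    def validate_user(self, value):  # validation unit (hypothetical name)
        return len(value) > 0

    def on_submit(self, form):       # screen-dependent process
        return "next" if self.validate_user(form["user"]) else "error"

def dispatch(screen, event, payload):
    """Route an event to a handler located purely by naming convention."""
    handler = getattr(screen, "on_" + event, None)
    if handler is None:
        raise KeyError("no handler for event %r" % event)
    return handler(payload)

print(dispatch(LoginScreen(), "submit", {"user": "alice"}))  # → next
```

The payoff of this style is that each unit can be developed and replaced independently, since nothing references another unit except through event names.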
We have developed an interactive education environment called “p-HInT”. Its purpose is to improve lecture-style classes with 200 or more attendees held in non-computer classrooms. The most distinctive feature of p-HInT is its use of the NINTENDO DS(R) mobile game terminal with the Opera browser. Each attendee brings a DS to a lecture; after the attendee logs into the p-HInT system through the DS, the teacher can see a list of attendees with their seating positions. In addition, after giving a test through the DSs, the teacher can immediately see the results. Because teachers can quickly grasp attendees' levels of understanding, they can adapt the lecture on the spot. In developing the p-HInT system, we devised novel approaches for assigning and tuning access points, reducing communication traffic, and managing client session information. As a result of applying p-HInT to lectures in our regular curriculum, we confirmed that results of essay-type tests in lectures using p-HInT are better than those in normal lectures (by t-test, significance level 5%). Moreover, lecture quality improved because private chatting among students and miscellaneous tasks decreased. In addition, we confirmed that the student-voice function of p-HInT makes students feel more familiar with the teacher, even when the number of students is 200 or more.
We describe NCAP, a new network capture tool for distributed sensor systems. NCAP operates on messages rather than on packets, and so performs full IP reassembly at the point of measurement. The resulting data can either be managed as files or be transmitted as encapsulated UDP datagrams, either unicast or multicast. The NCAP library is highly portable, has C and Python interfaces, and provides a plug-in mechanism whereby analysis logic can be written separately, without regard to the handling of encapsulated datagrams or files. The primary application of NCAP is the Security Information Exchange, where cooperating distributed sensor operators now submit captured DNS traffic to a centralized location for subsequent long-running analysis. We show examples of value-added reprocessing and rebroadcast, as well as samples of captured traffic and of possible security problems illuminated by our analysis. These results show that NCAP makes it possible to capture, share, and analyze live network data on a larger scale than has been done before.
GXP is a parallel shell designed to support a range of purposes, from system management to parallel processing, with a small installation effort. Its early prototype was developed at the end of 2003 and, after two complete rewrites, version 3 is now published as open-source software. While it shares the basic function of other parallel shells (executing the same command line on many hosts), it achieves superior speed (response time) and scalability. Its design also goes significantly beyond similar parallel shells in order to support parallel processing in distributed environments. For example, (1) it works flexibly in the presence of firewalls and NATs; (2) it works on top of various remote access protocols, such as SSH, Sun Grid Engine, TORQUE, and mixtures thereof; (3) it needs to be installed on only one host to be usable on all hosts; and (4) it supports features essential for “interactive” sessions, such as setting environment variables and current directories on remote hosts, and choosing execution hosts flexibly. In addition, it has built-in support for parallel and distributed execution of make, so the user can run coarse-grained tasks with dependencies with no more effort than writing a Makefile. With all these features, it significantly extends the range of applications compared with existing parallel shells, which mainly target system management of a single cluster. All in all, it is software for using clusters and distributed environments “comfortably.”
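The core function shared by parallel shells, running one command line on many hosts and collecting the outputs, can be sketched as follows. Local `sh` processes stand in for remote hosts here; real GXP dispatches over SSH or batch schedulers through its own daemons, so this is only a toy model:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Toy parallel-shell loop: the same command line runs concurrently on
# every "host"; a local shell process with HOST set stands in for an
# SSH session to that host.
def run_on(host: str, cmdline: str) -> str:
    out = subprocess.run(["sh", "-c", cmdline],
                         capture_output=True, text=True,
                         env={"HOST": host, "PATH": "/usr/bin:/bin"})
    return "%s: %s" % (host, out.stdout.strip())

hosts = ["host%02d" % i for i in range(4)]
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    # pool.map preserves input order, so output lines line up with hosts
    results = list(pool.map(lambda h: run_on(h, 'echo hello from "$HOST"'),
                            hosts))
for line in results:
    print(line)
```

What distinguishes a real parallel shell from this sketch is everything around the loop: traversing NATs, mixing access protocols, and keeping interactive session state on the remote side.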
SAGE has received attention as middleware that allows scientists who use large-scale data to control and drive a tiled display, which provides high-quality, high-density visualization with multiple displays. However, SAGE suffers from poor application operability: by architectural design, users must perform window-related operations and application-related operations from different interfaces. To address this problem, we have developed an application event control module for SAGE that unifies the two separate interfaces of window management and application operation. Our design principle is to minimize changes to the current SAGE architecture and to keep the module portable. In this paper, we describe the function, design, and implementation of the module. To evaluate its interactivity and operability, we measured the latency from the generation of an event to its delivery to an application, and the time users take to perform pointing operations through the SAGE UI. The results confirm that our module provides practical interactivity and allows users to perform pointing operations smoothly.
In this paper, we describe a SAT-based constraint solver named Sugar and evaluate its performance. Sugar can solve CSP (Constraint Satisfaction Problems), COP (Constraint Optimization Problems), and Max-CSP by encoding a given problem into a SAT problem and then solving the encoded problem with an efficient external SAT solver (e.g., MiniSat). For the SAT encoding, a new method named order encoding is used, which is more efficient on various problems than the widely used direct encoding and support encoding. This paper also summarizes the results of the 2008 Third International CSP/Max-CSP Solver Competition, in which Sugar won four of the ten categories.
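The idea behind order encoding is that an integer variable x over [0, d] is represented by Boolean literals meaning "x ≤ a" for each bound a, and a linear constraint such as x + y ≤ c turns into one two-literal clause per bound. The check below is a simplified semantic illustration (it evaluates the clauses by brute force rather than emitting CNF, and it is not Sugar's full encoding):

```python
from itertools import product

# Order encoding sketch: x in [0, d] is represented by the literals
# "x <= a" for each bound a; axioms (x<=a) -> (x<=a+1) keep them
# monotone. Here we evaluate the literals directly instead of
# generating CNF, to check the encoding's correctness.
d = 3  # domain [0, 3]

def leq(var, a):
    """Truth value of the order-encoding literal 'var <= a'."""
    return var <= a

def encodes_sum_leq(x, y, c):
    # x + y <= c  iff  for every a: (x <= a) or (y <= c - a - 1);
    # each disjunct is one two-literal clause in the real CNF.
    return all(leq(x, a) or leq(y, c - a - 1) for a in range(-1, d + 1))

# brute-force check: the clause conjunction agrees with x + y <= c
for x, y in product(range(d + 1), repeat=2):
    assert encodes_sum_leq(x, y, 4) == (x + y <= 4)
```

The clause shape explains the efficiency claim: a bound constraint needs only O(d) binary clauses under order encoding, whereas the direct encoding enumerates forbidden value pairs.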
LMNtal was designed and implemented as a unifying computational model based on hierarchical graph rewriting. In pursuit of applications to verification and search, the system has recently evolved into a model checker that employs LMNtal as a modeling language, and we have been accumulating experience with modeling and verification. This paper describes LMNtalEditor, an integrated development environment (IDE) featuring both state-space search and visualization for verification, and gives various examples of model description, verification, and visualization. LMNtalEditor supports visualizing and browsing the state space and counterexamples, searching for states of interest, and so on. We have successfully run diverse examples taken from fields including concurrency and AI search, and found that the IDE plays an essential role in understanding models and counterexamples, and thus greatly eases the task of verification. Furthermore, by encoding models written in Promela, MSR, and Coloured Petri Nets into LMNtal, we have extended the expressive power of LMNtal to the field of verification.
Many development tools for P2P applications have been proposed, but they do not address the problem of connecting otherwise unconnectable nodes: for example, IPv4-only nodes and IPv6-only nodes cannot be connected directly. We propose a development tool with ReverseLink and RelayLink. With our tool, for example, a NATed node can connect to another NATed node through an IPv4 relay node. Experiments show that our proposal can build a structured overlay network with 10 NATed nodes and 10 IPv4 nodes, although construction takes three times as long as for a normal overlay network.
In this paper, we propose an algorithm of type error slicing for type-based information flow analysis of imperative programs. Our slicing method is based on an existing one for functional programming languages. Type-based information flow analysis is useful for detecting illegal information leaks, but it is sometimes difficult to understand why type checking or inference fails. Program slicing helps programmers locate the cause of a type error. We implemented the proposed algorithm as a prototype tool and applied it to some simple programs.
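The kind of error such an analysis reports, and the program points a slice would point at, can be illustrated with a toy two-level (Low/High) check over straight-line assignments. The representation and the output format are invented for illustration; the paper's algorithm handles full imperative programs, including implicit flows:

```python
# Toy information-flow check: an assignment is illegal if a High
# (secret) value flows into a Low (public) variable. The reported
# triples are a crude "slice" locating the cause of the type error.
LEVELS = {"L": 0, "H": 1}

def check(program, env):
    """Return (line, target, sources) triples where High flows to Low."""
    leaks = []
    for line, (target, sources) in enumerate(program, start=1):
        level = max(LEVELS[env[s]] for s in sources) if sources else 0
        if level > LEVELS[env[target]]:
            leaks.append((line, target, sources))
        if level:
            env[target] = "H"  # the target now carries secret data
    return leaks

env = {"secret": "H", "tmp": "H", "out": "L"}
prog = [("tmp", ["secret"]),   # H := H, fine
        ("out", ["tmp"])]      # L := H, illegal explicit flow
print(check(prog, env))        # → [(2, 'out', ['tmp'])]
```

Note that the leak is reported at line 2, but understanding it requires line 1 as well; collecting that whole chain is what a type error slice does.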
We propose validation of meta-models using the formal specification language Alloy. The scope of a model is divided by meta-hierarchy level, which makes execution of the validation efficient. The purpose of this paper is to validate, efficiently, the specification of a description language for information and control systems.
Multivariate linear regression models have been commonly used as software effort prediction models. To improve prediction accuracy, it is common practice to transform (especially, log-transform) the data before building a model, although the theoretical basis for doing so is not always clear. This paper shows that the log-transformed linear regression model (log-log regression model) is equivalent to the exponential model, which is well suited to characterizing various relationships among software-related metrics. However, when a log-log regression model is used, the inverse transformation tends to underestimate the effort. This paper also introduces a method to correct this bias.
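The underestimation can be demonstrated numerically. If ln(y) = a + b·ln(x) + e with e ~ N(0, s²), then E[y|x] = exp(a)·x^b·exp(s²/2), so simply exponentiating the prediction is too small by the factor exp(s²/2). Whether this parametric factor matches the paper's exact correction is an assumption on our part (a common nonparametric alternative is Duan's smearing estimator, mean(exp(residuals))):

```python
import math
import random
import statistics

# Simulate data that follows the log-log (exponential) model exactly,
# then compare the naive back-transformed prediction with the
# bias-corrected one against the empirical mean.
random.seed(0)
a, b, s = 1.0, 1.2, 0.8   # true coefficients and log-scale noise s.d.
xs = [random.uniform(1, 100) for _ in range(50000)]
ys = [math.exp(a + b * math.log(x) + random.gauss(0, s)) for x in xs]

x0 = 10.0
naive = math.exp(a + b * math.log(x0))        # back-transform only
corrected = naive * math.exp(s ** 2 / 2)      # retransformation correction
empirical = statistics.mean(y for x, y in zip(xs, ys) if abs(x - x0) < 1.0)
print(naive, corrected, empirical)            # empirical ≈ corrected > naive
```

With s = 0.8 the correction factor is exp(0.32) ≈ 1.38, i.e., the naive inverse transform underestimates mean effort by nearly 40%.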
There has been considerable research on fault-prone (FP) module prediction using software metrics, and its findings are useful for planning module reviews and testing. Toward practical application of FP module prediction, this paper focuses on the cost-effective selection of modules to be reviewed preferentially, since practitioners face real constraints on development time and cost. The paper treats the fault-proneness of a module as the worth of reviewing it, and formulates the optimal selection of modules to be reviewed as a 0-1 integer programming problem, i.e., a knapsack problem. An empirical study using 500 sample modules from NASA IV&V shows that the proposed method is more cost-effective than the conventional one.
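The formulation can be sketched as a standard 0-1 knapsack: each module has a review cost and a worth (its predicted fault-proneness), and we pick the subset maximizing total worth within a cost budget. The costs and worth values below are made up for illustration, not the NASA IV&V data:

```python
# 0-1 knapsack by dynamic programming: select modules maximizing total
# worth (scaled fault-proneness) subject to an integer review budget.
def select_modules(costs, worths, budget):
    # dp[c] = (best total worth, chosen module set) with total cost <= c
    dp = [(0, frozenset())] * (budget + 1)
    for i in range(len(costs)):
        for c in range(budget, costs[i] - 1, -1):  # reverse: each item once
            cand = dp[c - costs[i]][0] + worths[i]
            if cand > dp[c][0]:
                dp[c] = (cand, dp[c - costs[i]][1] | {i})
    return dp[budget]

costs  = [4, 3, 2, 5]  # review effort per module (made-up units)
worths = [9, 6, 5, 7]  # predicted fault-proneness x 10 (made-up)
best, chosen = select_modules(costs, worths, budget=7)
print(best, sorted(chosen))  # → 15 [0, 1]
```

Note that the greedy choice by worth/cost ratio would pick module 2 (ratio 2.5) first; the exact knapsack solution instead spends the whole budget on modules 0 and 1, which is why an integer-programming formulation can beat simple ranked selection.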