This tutorial provides an overview of mechanism design theory, which investigates rules or protocols by which multiple agents make a social decision. The theory models the decision problem as a game of incomplete information, in which no player can directly observe her opponents' types. Under the assumption that each agent behaves so as to maximize her individual utility, mechanism design theory analyzes how a player chooses her action under a mechanism, and how to design a mechanism that achieves a socially desirable outcome or a goal of the designer. This tutorial first briefly explains games of incomplete information and the concept of mechanism design. Then, as a typical example, we focus on auctions that sell a single item and explain several theoretical results in mechanism design theory.
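As a concrete illustration of the single-item auctions discussed above, the following is a minimal sketch of a second-price (Vickrey) sealed-bid auction, a classic mechanism in which truthful bidding is a dominant strategy. The function name and bid values are illustrative, not taken from the tutorial.

```python
def second_price_auction(bids):
    """Run a sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid value.
    Returns (winner, price): the highest bidder wins but pays
    only the second-highest bid, which is what makes reporting
    one's true valuation a dominant strategy for every bidder.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid
    return winner, price

winner, price = second_price_auction({"alice": 10, "bob": 7, "carol": 4})
# alice wins the item and pays bob's bid of 7
```

Because the price a winner pays does not depend on her own bid, overbidding or underbidding can never improve her utility, which is the key incentive property analyzed in the mechanism design literature.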
Despite the long history of software evolution research and experience, it is still difficult to evolve software properly, and many systems have exhibited failures immediately after their evolution. To understand the essence of this problem and to overcome it, we considered fundamental principles and techniques for software evolution. We propose the concept of Process Driven Evolution and, based on it, discuss evolution methodologies and technical challenges.
In recent years, research on technologies called Privacy-Preserving Data Mining (PPDM), which aim to balance privacy protection and data utilization, has become active. One of the most important topics in this field is the notion of privacy, that is, what must be protected before one can say that privacy is well preserved. Among such privacy notions, Differential Privacy, proposed by Dwork in 2006, has attracted the most attention. This paper gives an overview of PPDM and briefly introduces Differential Privacy.
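For reference, the notion mentioned above can be stated precisely. A randomized mechanism $M$ satisfies $\varepsilon$-differential privacy (Dwork, 2006) if, for every pair of datasets $D$ and $D'$ differing in a single record, and for every set $S$ of possible outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S]
```

Intuitively, the presence or absence of any one individual's record changes the distribution of the mechanism's output by at most a factor of $e^{\varepsilon}$, so little can be inferred about that individual from the output.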
Formal specifications have been attracting more and more attention as software plays a key role in society and its development is strongly required to be efficient and reliable. Formal specifications eliminate ambiguity and incompleteness through description in languages with rigorous syntax and semantics, as well as through scientific, systematic analysis and verification. This paper provides an introduction to fundamental approaches to formal specification. It specifically presents an example description with B-Method, a method for deriving “programs proved to be correct” from abstract specification models. It also provides an overview of other methods, application cases, research studies, and recent trends.
Recently, new network services on the Internet that use special information obtained from a router or a gateway have been proposed and studied. Although Layer-7 inspection software for gateways is available, existing inspection software does not support the application-protocol processing needed for searching and extracting information, such as HTTP/1.1 gzip and chunked encoding. In this paper, an open source software package, SLIM (Smart Linux Interface Monitor), was implemented and evaluated. It provides a TCP stream reconstruction function and HTTP/1.1 processing to support string extraction from Linux eth devices and pcap files using the libpcap library. SLIM implements a TCP stream reconstruction algorithm based on context-switch processing in order to reduce the required amount of memory. Simulation results show that SLIM achieves 21.3 Mbps processing at a gateway; when directly reading pcap files, it provides 86.8 Mbps when storing results in PostgreSQL and 1.12 Gbps when storing directly to files. SLIM can analyze a 1.5 TB enterprise traffic file and handle 730,000 connections with 5.87 GB of memory consumption in offline mode. We confirmed that SLIM maintained stable operation on a laboratory gateway for over three months.
To investigate the impact of reviewers' experience and review time on the comprehension level of source code in software evolution, we conducted an experiment with software development practitioners using source code written in Java along with four modification tasks. Each task had a change specification and corresponding modified source code. The subjects were asked to judge whether the modified source code met the corresponding change specification, and recorded their review time and the reasons for their judgments. Each reason was categorized into one of three comprehension levels. The results from 66 subjects showed no statistically significant relationship between comprehension level and review time. The results from 65 subjects showed that review experience had a large impact on comprehension level.
We developed a framework for easily implementing scripting-language profilers. Our framework enables users to construct a real-time profiler for multiple scripting languages and to add measurement targets. For convenience, the framework also provides functions for obtaining profile information from remote hosts and context information. In this paper, we describe the design and implementation of our framework. We also show some reference implementations of a profiler built with the framework, and a practical profiling example with a real-world application. We conclude that our framework is suitable for practical use.
In this paper, we describe the Copris (Constraint Programming in Scala) system, which is developed as a Domain-Specific Language (DSL) for constraint programming embedded in the Scala programming language. Copris is designed to help Scala programmers easily solve Constraint Satisfaction Problems (CSP) and Constraint Optimization Problems (COP), and offers richer descriptive power than existing CSP languages, such as JSR-331, a standardized constraint programming API for Java. Copris also provides high-performance constraint solving, since its backend is the constraint solver Sugar, which won the global constraint categories of the international CSP solver competitions in two consecutive years. After describing the design of the Copris DSL, we demonstrate its effectiveness with some example programs.
Diameter Base Protocol is a protocol for AAA (Authentication, Authorization, and Accounting) that was designed as a successor to RADIUS. For specific AAA purposes, several Diameter Applications are defined on top of Diameter Base Protocol. Diameter EAP Application is one of these, aimed at network access control. EAP (Extensible Authentication Protocol) is a generic authentication protocol that supports several authentication methods called EAP methods. EAP-TTLS is one such EAP method; it achieves strong security and is easy to deploy. This paper presents the first open-source implementation of an EAP-TTLS server that runs on Diameter EAP Application. Our implementation supports four main authentication methods (PAP, CHAP, MS-CHAP, and MS-CHAPv2). Functional tests confirmed that our EAP-TTLS server could authenticate terminals running Windows, Linux, iOS (iPad), and Android. The measurement results show that the authentication time is short enough for practical operation. In addition, this paper describes in detail how to implement EAP-TTLS on Diameter EAP Application as an EAP method, and how to implement authentication methods in an EAP-TTLS server. One purpose of this paper is to serve as a guide for those who implement another EAP method on Diameter EAP Application, or another authentication method on an EAP-TTLS server.
State-of-the-art SAT solvers have achieved many successful results in research on software and hardware verification, planning, scheduling, constraint satisfaction and optimization, and so on. GlueMiniSat 2.2.5 is a SAT solver based on literal block distance (LBD), proposed by Audemard and Simon, a criterion for predicting the quality of learnt clauses in CDCL solvers. The effectiveness of LBD was shown by their SAT solver Glucose 1.0 at the SAT 2009 competition. GlueMiniSat uses a slightly modified notion of LBD, called pseudo LBD, and adopts an aggressive restart strategy to promote the generation of good learnt clauses. GlueMiniSat shows good performance on unsatisfiable SAT instances. In the SAT 2011 competition, GlueMiniSat took first and second place among sequential SAT solvers in the UNSAT and SAT+UNSAT classes of the application category, respectively. Moreover, GlueMiniSat took second place in the UNSAT class in the category that includes parallel SAT solvers.
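The LBD measure mentioned above has a simple definition: the LBD of a learnt clause is the number of distinct decision levels among its literals (Audemard and Simon); clauses with small LBD ("glue clauses") are predicted to be high quality. The sketch below illustrates the standard definition only, not GlueMiniSat's pseudo LBD variant, and assumes decision levels are already known:

```python
def lbd(clause, level):
    """Literal block distance of a learnt clause.

    clause: iterable of literals (integers in DIMACS style,
            where -x denotes the negation of variable x).
    level:  dict mapping each variable to the decision level
            at which it was assigned.
    Returns the number of distinct decision levels spanned by
    the clause's literals.
    """
    return len({level[abs(lit)] for lit in clause})

# Variables 1 and 2 were assigned at level 3, variable 5 at
# level 7, so the clause spans two decision levels: LBD = 2.
levels = {1: 3, 2: 3, 5: 7}
print(lbd([1, -2, 5], levels))  # 2
```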
These days, memory protection is an important feature for embedded and real-time systems. Some embedded processors therefore have a memory protection unit (MPU), hardware designed exclusively for memory protection. An MPU has a limited number of regions in which memory protection attributes can be set, so when using an MPU, the memory layout must be determined statically so that memory regions sharing the same protection attribute are placed contiguously. In this paper, we describe the TOPPERS/HRP2 kernel, a real-time OS with memory protection. The HRP2 kernel determines the memory layout statically during static configuration and supports memory protection using an MPU. We evaluated the overhead caused by memory protection in the HRP2 kernel by comparison with a real-time OS without memory protection.
Dual-OS communications allow a real-time operating system (RTOS) and a general-purpose operating system (GPOS), sharing the same processor through virtualization, to collaborate in complex distributed applications. However, they also introduce new threats to the reliability (e.g., memory and time isolation) of the RTOS that need to be considered. Traditional dual-OS communication architectures follow essentially the same conservative approach, which consists of extending the virtualization layer with new communication primitives. Although this approach may be able to address the aforementioned reliability threats, it imposes considerable overhead on communications due to unnecessary data copies and context switches. In this paper, we propose a new dual-OS communications approach able to accomplish efficient communications without compromising the reliability of the RTOS. We implemented our architecture on a physical platform using a highly reliable dual-OS system (SafeG) which leverages ARM TrustZone hardware to guarantee the reliability of the RTOS. The evaluation results show that our approach is effective at minimizing communication overhead while satisfying the strict reliability requirements of the RTOS.
In this paper, we present a novel binary analysis method for malware which combines static and dynamic techniques. In the static phase, the target address of each indirect jump is resolved using backward analysis on a static single assignment form of the binary code. In the dynamic phase, target addresses that are not statically resolved are recovered by way of emulation. The method is generic in the sense that it can reveal the control flow of self-extracting/obfuscated code without requiring special assumptions on executables, such as compliance with standard compiler models, which is requisite for conventional static binary analysis methods but does not hold for many malware samples. Our current attempt to use a hypervisor monitor as a dynamic analyzer is also presented.
Multiprocessor systems have recently been making inroads into the embedded systems field. The requirements of embedded systems differ from system to system: some systems require real-time behavior, others emphasize throughput, and some require both properties at the same time. Existing RTOSs for embedded multiprocessor systems satisfy only one of these demands. We designed and implemented the TOPPERS/FMP kernel to satisfy both. In order to support a load balancing algorithm fitted to each system without losing the real-time property, the TOPPERS/FMP kernel provides the capability to migrate a task from its current processor to another on demand, through a system call issued by the application rather than by automatic load balancing in the RTOS kernel. This paper highlights the design and implementation of the task migration functionality. We have confirmed that it enables application-level programs to realize several kinds of load balancing algorithms.
The Graphics Processing Unit (GPU), originally used exclusively for image processing, has been widely applied to general-purpose computation, so-called GPGPU. Although several development environments are already provided, software development costs remain high. Implementing a GPGPU program that exploits parallelism requires not only realizing the target algorithm, but also knowledge of the architecture, such as the memory hierarchy. To support parallel programming with GPGPU, we propose ParaRuby, a distributed GPGPU framework using Ruby. The framework enables programmers to implement GPGPU programs in Ruby and to execute them on multiple remote nodes. This paper reports several evaluations of applications implemented on the framework and discusses performance and programmability.
In requirements elicitation using a goal-oriented requirements analysis method, it is important to detect conflicts between contribution and validity with respect to customers' needs. However, no studies of how to detect this conflict have been reported. In this paper, we propose an algorithm that detects every goal with this conflict in any goal graph. The algorithm is based on depth-first search. We also implemented the algorithm as a computer program and applied it to a tentative goal graph, confirming that it functions correctly.
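The paper's exact conflict condition is not reproduced here, but the depth-first skeleton of such an algorithm can be sketched. In this hypothetical fragment, each goal carries a (contribution, validity) pair, and a goal is flagged when the two disagree; the attribute encoding and the conflict test are illustrative assumptions, not the authors' definitions.

```python
def find_conflicting_goals(graph, attrs, root):
    """Depth-first search over a goal graph, collecting goals
    whose attributes conflict.

    graph: dict mapping a goal to its list of subgoals.
    attrs: dict mapping a goal to a (contribution, validity)
           pair of numeric scores (illustrative encoding).
    In this sketch, a goal conflicts when its contribution is
    positive but its validity is negative.
    """
    conflicting, visited = [], set()

    def dfs(goal):
        if goal in visited:  # goal graphs may share subgoals
            return
        visited.add(goal)
        contribution, validity = attrs[goal]
        if contribution > 0 and validity < 0:
            conflicting.append(goal)
        for sub in graph.get(goal, []):
            dfs(sub)

    dfs(root)
    return conflicting

goals = {"g0": ["g1", "g2"], "g1": [], "g2": ["g3"], "g3": []}
attrs = {"g0": (1, 1), "g1": (1, -1), "g2": (-1, -1), "g3": (1, -1)}
print(find_conflicting_goals(goals, attrs, "g0"))  # ['g1', 'g3']
```

Because every reachable goal is visited exactly once, the traversal examines each goal and edge once, matching the claim that every conflicting goal in the graph is detected.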
This paper provides a novel framework for constructing a parser to process and analyze texts written in a “semi-formalized” language or “partially-structured” natural language. In many projects, the contents of document artifacts tend to be described as a mixture of text constructs following specific conventions and parts written in arbitrary free text. Formal parsers, typically defined and used to process descriptions with rigidly defined syntax such as program source code, are very precise and efficient at processing the former part, while parsers developed for natural language processing (NLP) are good at interpreting the latter part robustly. Combining these parsers with different characteristics makes it more flexible and practical to process various project documents. Unfortunately, conventional approaches to constructing a parser from multiple small parsers have been studied extensively only for formal-language parsers and are not immediately applicable to NLP parsers, due to differences in the way the input text is extracted and evaluated. We propose a method to configure and generate a combined parser by extending an approach based on parser combinators, the operators for composing multiple formal parsers, to support both NLP and formal parsers. The resulting text parser is based on Parsing Expression Grammars and benefits from the strengths of both parser types. We demonstrate an application of such combined parsers in practical situations and show that the proposed approach can efficiently construct a parser for analyzing project-specific documents.
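The combinator idea underlying the proposal can be illustrated with a toy example. The sketch below composes two parsers, a strict one for a formal prefix and a permissive one standing in for an NLP component, using sequencing and a PEG-style ordered-choice combinator. All names and the example input are illustrative; the paper's actual combinators are not reproduced here.

```python
def token(word):
    """Formal parser: succeed only on an exact keyword."""
    def parse(text, pos):
        if text.startswith(word, pos):
            return word, pos + len(word)
        return None
    return parse

def free_text(stop):
    """Permissive parser standing in for an NLP component:
    consume arbitrary text up to a stop character."""
    def parse(text, pos):
        end = text.find(stop, pos)
        return None if end == -1 else (text[pos:end], end)
    return parse

def choice(*parsers):
    """PEG-style ordered choice: try parsers in order and
    commit to the first that succeeds."""
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

def sequence(*parsers):
    """Run parsers one after another, threading the input
    position and collecting their results."""
    def parse(text, pos):
        out = []
        for p in parsers:
            result = p(text, pos)
            if result is None:
                return None
            value, pos = result
            out.append(value)
        return out, pos
    return parse

# A line such as "REQ: the system shall log in users." mixes a
# formal prefix with free text; the combined parser handles both.
req_line = sequence(token("REQ:"), free_text("."))
print(req_line("REQ: the system shall log in users.", 0))
```

The `choice` combinator is what gives the combined parser its robustness: a strict formal parser can be tried first, with a permissive free-text parser as the fallback.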
In software development projects, large gaps exist between the planned development process and actual development. For example, requirements definition may be repeated during the design process at the sudden demand of a customer. We call such an added process a fragment process. Although complicated development processes may influence product quality, previous research has not clarified the relationship between process quality and product quality. We therefore propose a metric of process complexity, focusing on concurrent processes, in order to clarify this relationship. Process complexity is a value that quantitatively measures modifications of an original development process. In 8 industrial projects, we investigated the relationship between process complexity values and product quality. As a result, the correlation coefficient between process complexity and important failures was 0.786. In addition, we show a case study of using process complexity to predict post-release product quality.
This paper presents a technology that enables watching videos at very high speed. Subtitles are widely used in DVD movies and provide useful supplemental information for understanding video content. We propose a “two-level fast-forwarding” scheme for videos with subtitles, which controls the playback speed depending on the context: very fast during segments without language content such as subtitles or speech, and “understandably fast” during segments with such content. This makes it possible to watch videos at a higher speed than usual while preserving the entertainment value of the content. We also propose “centering” and “fading” features for the display of subtitles to reduce fatigue when watching high-speed video. We implemented and published a versatile video encoder that enables movie viewing with two-level fast-forwarding on any mobile device by specifying the playback speed, the reading rate, or the overall viewing time. The effectiveness of our proposed method was discussed and demonstrated in an evaluation study.
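The two-level control described above amounts to a simple speed schedule: each moment of the video is assigned one of two playback speeds according to whether it falls inside an interval containing language content. A minimal sketch, in which the speed values and the interval representation are illustrative assumptions rather than the paper's parameters:

```python
def playback_speed(t, language_segments, fast=8.0, understandable=2.0):
    """Two-level fast-forwarding: return the playback speed at
    time t (in seconds).

    language_segments: list of (start, end) intervals, in
    seconds, that contain language content such as subtitles
    or speech. Inside such an interval playback runs
    "understandably fast"; everywhere else it runs very fast.
    """
    for start, end in language_segments:
        if start <= t < end:
            return understandable
    return fast

segments = [(10.0, 15.0), (42.0, 50.0)]
print(playback_speed(5.0, segments))   # 8.0, no language: very fast
print(playback_speed(12.0, segments))  # 2.0, subtitles on screen
```

The same schedule could instead be derived from a target overall viewing time by choosing the two speeds so that the summed durations meet the budget, which is the kind of parameter the described encoder exposes.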
William James, the noted psychologist and philosopher, believed that smiling has a positive effect on our mind. James' view, which has been confirmed by several psychological studies, was that we become happier when we laugh. In this paper, we propose a new digital appliance that encourages the act of smiling in our daily lives. This system is designed for people who may not always realize when they are in low spirits and/or have difficulty smiling. In addition, we believe that this system will foster casual conversation and prompt communication with other people. Our appliance, called the HappinessCounter, combines visual smile recognition, user feedback, and network communication. We conducted trials of the HappinessCounter system. The system had positive effects on users' moods and prompted communication among family members, thereby increasing their positive mood as well.
In this paper, AirSketcher, a novel electric fan that enables the user to directly control the directions and paths in which wind is blown, is presented. AirSketcher is a robotic fan with two servomotors to control its orientation and an embedded camera. We introduce three techniques that allow the user to control and design wind direction and path: 1) AirWand: Drawing the wind path by moving visual markers through the air in front of the fan, 2) AirCanvas: Drawing the wind path on a tablet display that shows the camera view, and 3) AirFlag: Putting control markers in the environment to specify the manner in which wind is to be blown to the areas where the markers are placed. These techniques are described in detail, and their strengths and limitations are discussed.
Conventional context-aware systems normally use accelerometers and gyroscopes, making it difficult to recognize contexts such as having a meal or going to the toilet. We propose a new context recognition method based on scent, using a wearable scent sensor. Since our algorithm considers the characteristics of scent, it can recognize contexts that are difficult to recognize with conventional sensors. Evaluation results confirmed that the scent sensor identified having a meal with 94% accuracy and visiting a restroom with 97% accuracy. Furthermore, we implemented context-aware systems such as a life-log system for healthcare.