Software Defined Networking (SDN) is a new network architecture that provides flexible control in order to satisfy various requirements. This paper gives an overview and applications of OpenFlow, a standard interface designed for SDN. We first explain the standardized OpenFlow switch and protocol specification, and then show its application to datacenter networks. Finally, we introduce Trema, a framework for developing OpenFlow controllers, and RISE, a nation-wide OpenFlow testbed network.
Cryptography is one of the basic technologies for secure communication. The strength of a cryptosystem depends on the difficulty of its cryptanalysis. The security of the widely used RSA cryptosystem is primarily tied to the difficulty of factoring large composite numbers. Of course, factoring a challenge number of a specific length does not mean that the RSA cryptosystem is immediately “broken”. However, its strength could be gradually compromised by an epoch-making cryptanalysis, an increase in computational power, and so on. This is called cryptographic compromise. Nowadays it is becoming important to recognize this problem for a secure society. In this paper, we focus on the factoring cost as a characteristic parameter indicating the risk of cryptographic compromise. We first cover existing results on factoring. In particular, we describe a more realistic factoring cost on the growing Cloud computing environment, and then discuss the future risk of cryptographic compromise.
This survey paper discusses anomaly detection in Internet backbone traffic. We first briefly explain anomalous traffic harmful to users and network operators, then describe several types of anomaly detection algorithms in the recent literature. Finally, we demonstrate the time evolution of anomalies detected by four different anomaly detectors in Internet backbone traffic over 10 years.
This tutorial introduces cooperative game theory, one of the main branches of game theory. Cooperative game theory covers two major research topics. The first topic concerns how to divide the value of a coalition among agents. We explain desirable ways of dividing the rewards among cooperative agents, called solution concepts. Traditional cooperative game theory provides a number of solution concepts, such as the core, the Shapley value, and the nucleolus. We introduce algorithms for dividing the obtained rewards among agents and show their computational complexities. The second topic concerns partitioning a set of agents into coalitions so that the sum of the rewards of all coalitions is maximized; this is called the Coalition Structure Generation (CSG) problem. We explain efficient constraint optimization algorithms for solving the CSG problem. Furthermore, we introduce concise representation schemes for the characteristic function, since solution concepts and the CSG problem can likely be computed more efficiently by utilizing concise representations of a game.
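To illustrate one of the solution concepts above, the following is a minimal sketch of computing exact Shapley values by averaging marginal contributions over all join orders; the three-player glove game used as input is a textbook example, not one taken from the tutorial. This brute-force approach is only feasible for small games, since it enumerates all n! orderings.

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over all n! join orders (feasible for small n)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p to the coalition it joins.
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: phi[p] / n_fact for p in players}

# Illustrative glove game: a coalition is worth 1 iff it pairs
# player 1 (a left glove) with player 2 or 3 (right gloves).
glove = lambda S: 1.0 if 1 in S and (2 in S or 3 in S) else 0.0
```

Here `shapley_values([1, 2, 3], glove)` assigns 2/3 to the scarce left-glove holder and 1/6 to each right-glove holder, and the shares sum to the grand coalition's value of 1, illustrating the efficiency property of the Shapley value.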
Mining Software Repositories (MSR) is an attractive research field with a wealth of research topics. It is great fun for a researcher to mine new research goals from software repositories. The MSR field has grown up together with OSS communities, as researchers have mined the repositories of various open source software (OSS) projects. In this paper we introduce the fun of the MSR field as well as its typical research topics, an example of mining, and future directions.
It remains important for a development organization to realize high-quality software development, but few organizations have achieved it. This paper discusses a software development management method called “Software Quality Accounting”, which was originated by NEC Corporation to solve software quality problems. We then explain an actual improvement case in which we decreased the number of post-release defects by improving on previous applications of Software Quality Accounting. Based on this case, we show that strict application of Software Quality Accounting and overall software process improvement are required to realize high-quality software development.
Recently, many researchers have focused on managing and analyzing continuous data such as time series and geographical location data. Structured overlay network technologies have been proposed to manage large volumes of continuous data, and MapReduce platforms have been proposed to analyze them. However, it is difficult to analyze continuous data with MapReduce platforms on overlays, because general overlays use hash functions, which do not preserve continuity. In addition, general MapReduce platforms generate a large amount of communication and many synchronization operations when processing continuous data. To handle these problems, we propose a scalable computing platform for continuous data. Our platform achieves asynchronous computation and highly parallel performance for continuous data analysis. Concretely, our platform builds a balanced tree based on a SkipList over all nodes. This architecture enables each node to manage its child nodes' states, analysis results, and synchronization operations. In addition, our platform balances load by gathering load information from all nodes. As a result, our platform realizes high-performance MapReduce computing for continuous data.
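The key benefit of tree-shaped aggregation described above is that each parent only waits for its own children rather than for a global barrier. The following is a minimal single-process sketch of that idea (not the paper's actual SkipList-based implementation): per-node partial results are merged pairwise up a balanced tree.

```python
def tree_reduce(partials, combine):
    """Merge per-node partial results level by level up a balanced
    binary tree. Each internal node combines only its own children's
    results, which is what makes asynchronous, barrier-free
    aggregation possible in a distributed setting."""
    level = list(partials)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(combine(level[i], level[i + 1]))
            else:
                nxt.append(level[i])  # odd node out, promoted as-is
        level = nxt
    return level[0]
```

For example, `tree_reduce([1, 2, 3, 4, 5], lambda a, b: a + b)` performs the same reduction as a flat sum but in O(log n) tree levels, so in a distributed deployment each level can proceed as soon as its children finish.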
Intelligent Transportation Systems (ITS) are systems deployed to optimize road traffic and realize safe, efficient and comfortable human mobility. Cooperative ITS is a new vision of ITS in which vehicles, the roadside infrastructure, traffic control centers, road users, road authorities, road operators, etc. exchange and share information based on a common communication architecture — known as the ITS station reference architecture — supporting all types of ITS use cases over a diversity of access technologies (11p, 11n, 3G/4G, infra-red, ...). The building blocks of the ITS station are specified within ISO, ETSI, IETF and IEEE. To promote the deployment of Cooperative ITS and to encourage further research on it, we introduce open-source software combining IPv6 and GeoNetworking, two essential building blocks of the ITS station. It comprises the GeoNetworking protocol, the GN6 adaptation sub-layer (GN6ASL) and test tools for the Basic Transport Protocol (BTP). We implemented each module separately to facilitate analysis and modification of the protocol behavior, and provided a library for inter-process communication between the modules to allow extensibility. Our participation in the Cooperative ITS Plugtests organized by ETSI demonstrates that our implementation complies with the Cooperative ITS standards, while a basic performance evaluation shows that the overhead introduced by our implementation design is limited.
The domain name system (DNS), a tree-structured directory service for looking up resources such as IP addresses from domain names, has become a key infrastructure of the Internet. Many services and systems, such as Web services and e-mail systems, deeply rely on DNS, and a DNS fault could have a crucial impact on them. Therefore, evaluating the impact of misconfiguration and the fault tolerance of DNS is essential to keep services stable. Moreover, DNS delegation relationships have recently become more complex because DNS runs on the IPv4/IPv6 coexisting Internet. In this paper, we employ the DNS lookup graph, which represents the domain name lookup procedure as a labeled directed graph, to show trends in DNS lookup paths when resolving A and AAAA records over IPv4 and IPv6 transport, using datasets measured before, during, and after World IPv6 Day, and after World IPv6 Launch. The results reveal that these IPv6 events have promoted adding AAAA records to domain names, but most of these records cannot be resolved with IPv6 transport alone; i.e., IPv4 transport is required to resolve most (about 81%) of the AAAA records.
In this paper, we propose a method to infer regional-level AS topologies, and compare Asian AS topologies in 2004 and 2010. As the Internet has become globalized, international traffic between countries has become increasingly common. Analysis of the AS topology is helpful for understanding the macroscopic structure and evolution of the Internet. However, the AS topology represents relationships between AS organizations, so it is difficult to observe regional changes in Internet structure using existing methods alone. Our method focuses on the locations of BGP routers, infers the locations of AS boundaries, and maps geographical information onto AS topology data. As a result, we show the growth of each Asian country and demonstrate that our method enables observation of regional Internet structures.
In this research, we have developed a Smart Signage System for tablet devices as a new approach to digital signage applications. Tablets can be used in a smart way to display and use interactive content, and there is a need for research on how to make the most effective use of a large number of tablets. In this study, we consider the challenges of (1) adaptability to diverse network environments, (2) flexible content creation and delivery, and (3) comfortable operation of the system on the devices. The present paper discusses each of these points and proposes an implementation method that tackles these challenges efficiently. In particular, we address the key questions for a digital signage application: the content model, the display system, and the delivery system. We also introduce the pair-swipe pairing technique for delivering content between devices in a wireless environment, and conduct experiments to show its effectiveness.
Pattern matching using regular expressions is widely used in computer science, and various methods have been studied, one of which is converting regular expressions into DFAs. In this paper, we propose two methods for improving the throughput of pattern matching. The first is parallel matching: we define Simultaneous Finite Automata (SFA), a natural extension of classical automata, which can be run in parallel on each partition of the target string. The second is dynamic code generation, which generates optimized native code from DFAs and SFAs at run time. We implemented a regular expression matcher based on these two methods, and it achieved a speedup by a factor of 5.9 in a 6-core environment.
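The core idea behind running a DFA in parallel on partitions of the input can be sketched as follows: each chunk is simulated from every possible start state, yielding a state-to-state mapping, and the mappings are then composed left to right. This is only an illustration of the principle with a hand-written DFA for the regex (ab)*; the SFA construction and code generation in the paper are more involved.

```python
from concurrent.futures import ThreadPoolExecutor

# Hand-written DFA for the regex (ab)* over the alphabet {a, b}:
# state 0 is both the initial and the accepting state.
DELTA = {(0, 'a'): 1, (1, 'b'): 0}
SINK = -1  # dead state: no transition out

def run_chunk(chunk):
    """Simulate the DFA on one chunk from *every* start state,
    producing a state-to-state mapping for that chunk."""
    mapping = {}
    for start in (0, 1):
        s = start
        for ch in chunk:
            s = DELTA.get((s, ch), SINK)
            if s == SINK:
                break
        mapping[start] = s
    return mapping

def match(text, workers=2):
    """Split the input, process chunks in parallel, then compose
    the per-chunk mappings sequentially (cheap: one lookup each)."""
    k = max(1, len(text) // workers)
    chunks = [text[i:i + k] for i in range(0, len(text), k)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        maps = list(ex.map(run_chunk, chunks))
    s = 0  # initial state
    for m in maps:
        if s == SINK:
            return False
        s = m[s]
    return s == 0  # accept iff we end in the accepting state
```

The sequential composition step is O(number of chunks), so almost all of the work (the per-character simulation) is parallelized; the price is simulating each chunk from every state, which is the trade-off SFAs formalize.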
In this article, we propose a novel method to infer preconditions of a program. The method first generates a set of predicates from the program text, converts the program code into a logical formula, negates the postcondition given by the user, and conjoins them all into a single formula. It then enumerates the (possibly multiple) minimal unsatisfiable cores (MUCs) of the conjunctive formula, and finally extracts proper preconditions from the MUCs. We call these “quasi-weakest” preconditions in that each precondition is the weakest among all conjunctions of the predicates. We prototyped a tool named SMUCE that realizes the proposed method using CForge, a bounded verifier for C code. We then applied the tool to nine C functions implementing textbook algorithms, each with two postconditions, and compared the generated preconditions with manually specified ones. The results showed that SMUCE extracted preconditions equivalent to, or even weaker than, the manually specified ones for ten of the 18 programs in total, indicating that the proposed method can in principle infer applicable preconditions.
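For readers unfamiliar with minimal unsatisfiable cores, the following is a small self-contained sketch of the classic deletion-based extraction of one MUC, using a brute-force satisfiability check over DIMACS-style clauses. It illustrates the concept only; SMUCE works on formulas produced by a bounded verifier, not on raw CNF like this.

```python
from itertools import product

def sat(clauses):
    """Brute-force SAT check. Clauses are lists of nonzero ints
    (DIMACS-style literals); feasible only for tiny formulas."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def minimal_unsat_core(clauses):
    """Deletion-based MUC extraction: try dropping each clause;
    keep it out if the rest is still unsatisfiable."""
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not sat(trial):
            core = trial   # clause i is redundant for unsatisfiability
        else:
            i += 1         # clause i is necessary; keep it
    return core
```

For instance, `minimal_unsat_core([[1], [-1], [2], [-2, 3]])` returns `[[1], [-1]]`: the contradiction between x1 and ¬x1 is the whole reason the formula is unsatisfiable, and the other clauses are pruned away. Enumerating *all* MUCs, as the proposed method does, requires more machinery than this single-core extraction.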
We propose a hybrid effort estimation method based on multivariate linear regression analysis and analogy-based estimation (ABE). First, our method calculates an unreliability index of ABE for the estimation target project. Next, our method selects log-log regression estimation when the value of the index is low; otherwise it selects ABE estimation or combined estimation (the average of the ABE and log-log regression estimates). In our experiment, we compared the estimation accuracy of our method with that of conventional methods, and the results showed that the median Balanced Relative Error (an estimation accuracy index) improved from 47.2% to 39.7% when the variance of similar projects' effort was used as the reliability index.
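The selection rule above can be sketched as follows. The threshold value and the restriction to the combined (average) estimate in the "otherwise" branch are illustrative assumptions, not the paper's exact rule; the BRE formula shown is the commonly used definition |actual − estimated| / min(actual, estimated).

```python
def balanced_relative_error(actual, estimated):
    """Balanced Relative Error (common definition): the absolute
    error divided by the smaller of actual and estimated effort,
    penalizing over- and under-estimates symmetrically."""
    return abs(actual - estimated) / min(actual, estimated)

def hybrid_estimate(abe_est, loglog_est, unreliability, threshold=0.5):
    """Illustrative selection rule in the spirit of the abstract:
    when the unreliability index is low, return the log-log
    regression estimate; otherwise return the combined (average)
    estimate. Threshold 0.5 is a placeholder, not the paper's value."""
    if unreliability < threshold:
        return loglog_est
    return (abe_est + loglog_est) / 2.0
```

Averaging two estimators in the fallback branch is a simple form of ensemble estimation: when neither individual estimator can be trusted on its own, their mean often has a lower expected error than either one.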