Information-Centric Networking (ICN), also known as Content-Centric Networking (CCN), uses “content names” as communication identifiers and enables users to obtain content from caching routers in the network. This future Internet technology delivers content to users rapidly and efficiently. In this paper, we present the concept and characteristics of ICN/CCN and introduce prototype implementations as well as evaluation techniques such as simulators, emulators, and testbeds. We then discuss research directions toward ICN/CCN deployment.
This paper proposes a method to estimate malicious domain names from a large-scale DNS query–response dataset. The key idea of the work is to leverage a DNS graph, a bipartite graph consisting of domain names and their corresponding IP addresses. We apply Probabilistic Threat Propagation (PTP), seeded with a set of predefined benign and malicious nodes, to a DNS graph obtained from DNS queries observed at a backbone link. In an ROC analysis, our extended method (EPTP) outperformed the original PTP method by 9% and a traditional N-gram-based method by 40%. Finally, we identified 2,170 new malicious domain names with EPTP.
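To illustrate the propagation idea, the following is a minimal sketch of iterative threat-score propagation on a toy bipartite domain–IP graph. The graph, the seed labels, and the damping factor are invented for illustration; this is not the authors' EPTP implementation.

```python
# Minimal sketch of threat-score propagation on a bipartite DNS graph
# (domains <-> IPs). All data and parameters here are hypothetical.

def propagate(edges, seeds, iterations=20, damping=0.5):
    """edges: dict node -> set of neighbour nodes (bipartite adjacency).
    seeds: dict node -> fixed score (1.0 malicious, 0.0 benign)."""
    scores = {n: seeds.get(n, 0.0) for n in edges}
    for _ in range(iterations):
        new = {}
        for node, nbrs in edges.items():
            if node in seeds:          # seed scores stay fixed
                new[node] = seeds[node]
            else:                      # damped average of neighbour scores
                new[node] = damping * sum(scores[m] for m in nbrs) / len(nbrs)
        scores = new
    return scores

# Toy graph: an unknown domain shares an IP with a known-bad domain.
edges = {
    "bad.example": {"203.0.113.1"},
    "suspect.example": {"203.0.113.1"},
    "good.example": {"198.51.100.9"},
    "203.0.113.1": {"bad.example", "suspect.example"},
    "198.51.100.9": {"good.example"},
}
scores = propagate(edges, seeds={"bad.example": 1.0, "good.example": 0.0})
```

Under this toy model, `suspect.example` inherits a nonzero threat score through the shared IP address, while `good.example` stays at zero; thresholding such scores yields a candidate list of malicious domains.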
Improving availability and throughput is a significant challenge for data center networks. Recent studies have attempted to use a variety of routing and multipathing techniques. However, no method has yet combined improved availability and throughput with practical deployability, usually because of dedicated hardware requirements. In addition, recent approaches cannot maintain availability under topology changes because they require pre-configured topologies. In this study, we propose a method to construct an efficient random-graph-topology network on commodity hardware by using an IP tunneling technique. The proposed method requires only minimal modifications to the software network stack on end hosts. In addition, we designed and implemented a control-plane system for the proposed method using OSPF. Our evaluation shows that the proposed method improves network throughput and achieves high availability with commodity hardware switches.
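As one way to obtain the random graph topology mentioned above, the sketch below generates a random d-regular graph over hypothetical switch names by stub pairing (the configuration model, rejecting self-loops and duplicate edges). The node names and parameters are assumptions for illustration; this is not the paper's control-plane system.

```python
# Sketch: build a random d-regular topology over commodity switches.
# Hypothetical switch names; illustrative only.
import random

def random_regular(nodes, degree, seed=0):
    """Pair up node 'stubs' until a simple d-regular graph emerges,
    retrying whenever a self-loop or duplicate edge appears."""
    rng = random.Random(seed)
    while True:
        stubs = [n for n in nodes for _ in range(degree)]
        rng.shuffle(stubs)
        edges = set()
        ok = True
        while stubs:
            a, b = stubs.pop(), stubs.pop()
            if a == b or (a, b) in edges or (b, a) in edges:
                ok = False          # reject and resample the whole matching
                break
            edges.add((a, b))
        if ok:
            return sorted(edges)

topo = random_regular([f"sw{i}" for i in range(8)], degree=3)
```

Each edge of the resulting graph would then be realized as an IP tunnel between the two switches, so the random topology rides over ordinary commodity forwarding.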
A data infrastructure system is composed of two subsystems: an SQL query processing system and a transaction processing system. A conventional database management system (DBMS) encompasses both, while recent work focuses on improving each subsystem individually for high efficiency. This paper surveys recent advances in both areas.
Open source software (OSS) has become essential not only for individual developers and academia but also for enterprises. Enterprises can benefit from using OSS to develop software products. However, using OSS in enterprises raises many problems regarding quality assurance, license compliance, version mismatches, and so on. In this paper, we introduce several case studies of software development using OSS at Fujitsu and Fujitsu Laboratories. Furthermore, we discuss the benefits and problems of using OSS in enterprises and describe expectations for the future of OSS engineering.
Programmers often use keyword-based code search tools to find code fragments that must be changed simultaneously. However, search results typically also contain many code fragments that do not need to be changed, and such fragments can lead programmers to make mistakes. To prevent such mistakes, we investigated reordering the results of a keyword-based code search tool using code clones and logical couplings. We applied the two reordering approaches to three software systems at a company. As a result, we confirmed that code fragments requiring changes were ranked higher when code clones were used. However, code fragments requiring changes were occasionally not ranked higher when logical couplings were used.
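A clone-based reordering of the kind described could be sketched as follows: hits that belong to a clone set containing other hits are promoted, on the assumption that cloned fragments tend to require simultaneous change. The scoring rule and fragment identifiers are hypothetical, not the paper's tool.

```python
# Sketch: reorder keyword-search hits so fragments belonging to a
# code-clone set rank higher. Hypothetical scoring; illustrative only.

def reorder(hits, clone_sets):
    """hits: list of fragment ids in the original search order.
    clone_sets: list of sets of fragment ids detected as clones."""
    def score(frag):
        # count how many *other* hits are clones of this fragment
        return sum(len(cs & set(hits)) - 1 for cs in clone_sets if frag in cs)
    # stable sort keeps the original order among equally scored hits
    return sorted(hits, key=score, reverse=True)

hits = ["f1", "f2", "f3", "f4"]
clone_sets = [{"f2", "f4", "f9"}]
ranked = reorder(hits, clone_sets)
```

Here `f2` and `f4` are clones of each other, so both rise above the unrelated hits while their relative order is preserved.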
In recent years, automated program repair that reuses existing source code has attracted much attention. In the insert operation of automated program repair, a source code line is selected from existing source code and inserted into a faulty code region. However, only a limited number of bugs can be fixed by reusing code lines that already exist in the target source code. In this research, we propose two approaches that enable automated program repair to fix more bugs. The first approach reuses code lines from a large dataset of source code. The second approach normalizes variable names. In this paper, we examine how many bugs can be fixed by automated program repair with these approaches. To evaluate them, we measure the ratio of code lines reusable from existing source code among all code lines inserted in bug-fix commits. The investigation of 5 software repositories showed that the first and second approaches increased the ratio from 37–54% to 43–59% and 56–64%, respectively. When both approaches were used together, the ratio increased to 64–69%.
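Variable-name normalization can be sketched as follows: identifiers are replaced with positional placeholders so that two lines that differ only in naming become identical and therefore reusable across projects. The tokenization rule is a simplifying assumption, not the paper's implementation.

```python
# Sketch of variable-name normalization for code-line reuse, assuming a
# simple regex tokenizer; not the paper's implementation.
import re
import keyword

def normalize(line):
    """Replace identifiers (except language keywords) with $v0, $v1, ...
    in order of first appearance, so that lines that differ only in
    variable names normalize to the same string."""
    mapping = {}
    def repl(m):
        name = m.group(0)
        if keyword.iskeyword(name):
            return name
        if name not in mapping:
            mapping[name] = f"$v{len(mapping)}"
        return mapping[name]
    return re.sub(r"\b[A-Za-z_]\w*\b", repl, line)

# Two lines that differ only in variable names normalize identically.
a = normalize("if count > limit: return total")
b = normalize("if size > bound: return acc")
```

After normalization, both lines read `if $v0 > $v1: return $v2`, so a match found in a large corpus can be re-instantiated with the faulty region's own variable names before insertion.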
Tree automata completion is a popular method for reachability analysis over term rewriting systems, with many applications such as confluence analysis and normalizing-strategy analysis. The key issue for this method is to ensure termination of the completion procedure, and it is known that the procedure terminates for the classes of growing term rewriting systems and finite path overlapping term rewriting systems. In this paper, we propose a new class of term rewriting systems, named non-left-right-overlapping term rewriting systems, which is incomparable with the classes of growing systems and finite path overlapping systems. By analyzing the overlap relation between the left-hand and right-hand sides of the rewrite rules, we give a sufficient condition for termination of the tree automata completion procedure for non-left-right-overlapping term rewriting systems. The reachability problem is decidable for the class of term rewriting systems satisfying this sufficient condition.
A linear-time reversible self-interpreter in an r-Turing-complete reversible imperative language is presented. The proposed imperative language has reversible structured control-flow operators and symbolic tree-structured data (S-expressions). The latter data structures are dynamically allocated and enable reversible simulation of programs of arbitrary size and space consumption. As self-interpreters are used to establish a number of fundamental properties in classical computability and complexity theory, the present study of an efficient reversible self-interpreter is intended as a basis for future work on reversible computability and complexity theory as well as programming-language theory for reversible computing. Although the proposed reversible interpreter consumes superlinear space, restricting the number of variables in the source language yields linear-time reversible simulation.
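The core idea of reversible imperative execution can be illustrated with a toy update language in which every statement has an exact inverse, so any program can be run backwards by inverting each step in reverse order. This mini-language is an invented illustration, not the paper's language or interpreter.

```python
# Sketch of reversible assignment: each update has an exact inverse, so
# a program can be run backwards. Hypothetical mini-language; not the
# paper's reversible self-interpreter.

def run(prog, env, reverse=False):
    """prog: list of (op, var, src) updates with op in {'+=', '-='}.
    Reversing flips both the statement order and each operator."""
    inv = {"+=": "-=", "-=": "+="}
    steps = [(inv[op], v, s) for op, v, s in reversed(prog)] if reverse else prog
    for op, v, s in steps:
        env[v] = env[v] + env[s] if op == "+=" else env[v] - env[s]
    return env

prog = [("+=", "x", "y"), ("-=", "y", "x")]
fwd = run(prog, {"x": 2, "y": 5})            # run forward
back = run(prog, dict(fwd), reverse=True)    # undo: recovers the input
```

Because no update destroys information, the backward run recovers the original store exactly; a reversible self-interpreter applies the same discipline to the interpretation process itself.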