In computer science, parallel and distributed processing has long been investigated and developed for its applications to social infrastructure. Today it underpins the growing rate of Internet access around the world, which produces shared knowledge, a kind of collective intelligence that emerges from the collaboration and competition of many individuals and appears in the consensus decision making of bacteria, nervous systems, animals, humans and perhaps computer networks. The brain achieves various cognitive functions through similar processes and, moreover, realizes a self-referential function. We discuss how the brain gives rise to a single entity such as the self as a consequence of cooperation among distributed autonomous components, and propose a research approach of brain-based robotics that investigates the decision-making process in a social context. This synthetic approach may help us understand the conscious brain and offer a key to parallel and distributed processing with a self-referential property.
Computer simulation is an important means of understanding complex networks, much as Artificial Intelligence is a means of understanding the intelligence of life. This article provides an overview of simulation techniques in complex network research, following the major literature, and explains how the focus of the research is shifting from imitation to creation.
One direction of network research is currently turning to understanding how the structure of social networks determines the dynamics of various types of social processes. An important feature of many complex systems, both natural and artificial, is the structure and organization of their interaction networks. In our study, the optimal network topology is generated by using a genetic algorithm (GA). We use a weighted fitness function combining the maximum eigenvalue of the adjacency matrix and the link density. We show various types of networks generated as the optimal networks and investigate their properties. Our network design method enables us to obtain networks with better performance for diffusion processes.
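The two fitness ingredients named above can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's implementation: the linear combination, the weights, and the sign convention (rewarding a large spectral radius while penalizing link cost) are assumptions.

```python
# Sketch of a weighted fitness over the adjacency-matrix spectrum and link density.
# The combination f = w1 * lambda_max - w2 * density is an assumed form for illustration.

def max_eigenvalue(adj, iters=200):
    """Largest eigenvalue of a symmetric 0/1 adjacency matrix via power iteration."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w) or 1.0
        v = [x / lam for x in w]
    return lam

def link_density(adj):
    """Fraction of realized links among all possible undirected links."""
    n = len(adj)
    m = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return 2.0 * m / (n * (n - 1))

def fitness(adj, w1=1.0, w2=1.0):
    # Fast diffusion favors a large spectral radius; density acts as a cost term.
    return w1 * max_eigenvalue(adj) - w2 * link_density(adj)

# Complete graph K4: lambda_max = 3, density = 1.
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
```

A GA would evolve a population of such adjacency matrices, selecting by this fitness and applying crossover and mutation on the link variables.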
A food web is a highly complex network that describes the prey-predator relationships among species. Understanding the structure and functioning of ecosystems by exploring their network topology has long been a central topic of ecological research. This work focuses on the influence of a restriction arising from the trophic level on the network topology of food webs by investigating an evolutionary model that captures the essential features of real-world networks. The results are in good agreement with empirical data, and we present a scenario for the evolution of the network.
Many real-world data can be represented as bipartite networks composed of two types of vertices. Paper-author networks and event-attendee networks are examples of bipartite networks. Detecting communities in such bipartite networks is practically important for finding similar items and for understanding the structures of the networks. To evaluate the goodness of communities detected in unipartite networks, Newman-Girvan modularity is often employed. For bipartite networks, Barber, Guimera, Murata and Suzuki have proposed bipartite modularities. This paper compares these bipartite modularities in order to understand their properties. Experimental results for synthetic bipartite networks show that (1) computation of Barber's bipartite modularity is relatively fast, and (2) the accuracy of communities detected by maximizing Suzuki's bipartite modularity is higher than that obtained by maximizing the other bipartite modularities. In addition, we implemented a fast optimization method for detecting communities in large bipartite networks.
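As a concrete reference point for one of the measures compared here, the following is a minimal, unoptimized sketch of Barber's bipartite modularity in its standard form, Q_B = (1/m) Σ_ij (A_ij − k_i d_j / m) δ(g_i, g_j), where the sum runs over type-1/type-2 vertex pairs. The encoding of communities as integer labels is an implementation choice, not taken from the paper.

```python
def barber_modularity(edges, comm1, comm2):
    """Barber's bipartite modularity for an edge list between two vertex types.
    comm1/comm2 map each type-1/type-2 vertex to an integer community label."""
    m = len(edges)
    eset = set(edges)
    k, d = {}, {}  # degrees of type-1 and type-2 vertices
    for u, v in edges:
        k[u] = k.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    q = 0.0
    for u in comm1:
        for v in comm2:
            if comm1[u] == comm2[v]:  # delta term: only same-community pairs count
                a = 1.0 if (u, v) in eset else 0.0
                q += a - k.get(u, 0) * d.get(v, 0) / m
    return q / m

# Two disjoint complete bipartite blocks: a clean two-community structure.
edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2"),
         ("a3", "b3"), ("a3", "b4"), ("a4", "b3"), ("a4", "b4")]
comm1 = {"a1": 0, "a2": 0, "a3": 1, "a4": 1}
comm2 = {"b1": 0, "b2": 0, "b3": 1, "b4": 1}
```

For this perfectly separated example the value is 0.5, the maximum for two equal-sized blocks.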
In our study, we investigated how each player's decision making based on the opponent's "cooperativeness" (how cooperative the opponent is) affects the evolution of cooperation in various types of networks. We analyzed the evolution of cooperation in populations where each player "does not refer to the opponent's cooperativeness", "refers to the opponent's cooperativeness toward the player him/herself", or "refers to the opponent's cooperativeness toward all of the opponent's neighbors". Our study clarifies the following two facts: (a) In a population where each player refers to the opponent's cooperativeness, cooperation does not decline much as each player's temptation to defect on a cooperator increases. In general, if players do not consider their opponents' cooperativeness, cooperation is hard to promote; however, decision making that takes the opponent's cooperativeness into account can restrain the evolution of defection. (b) In particular, on complex networks such as small-world and scale-free networks, cooperation is easier to promote when each player considers the opponent's cooperativeness toward "all of the opponent's neighbors" than when each player refers to the opponent's cooperativeness toward "the player him/herself".
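The three reference modes can be made concrete with a toy decision rule. Everything below is an illustrative assumption, not the paper's model: cooperativeness is taken as the fraction of cooperative moves in an interaction history, and the 0.5 decision threshold is arbitrary.

```python
# Toy sketch of the three cooperativeness-reference modes (all details assumed).
# An opponent's history maps each neighbor to a list of past moves (1 = cooperate).

def cooperativeness_with(history, partner):
    """Fraction of cooperative moves the opponent made toward one partner."""
    moves = history.get(partner, [])
    return sum(moves) / len(moves) if moves else 0.0

def cooperativeness_overall(history):
    """Fraction of cooperative moves the opponent made toward all neighbors."""
    moves = [m for ms in history.values() for m in ms]
    return sum(moves) / len(moves) if moves else 0.0

def decide(mode, opponent_history, me="me"):
    """Cooperate iff the referenced cooperativeness exceeds 0.5 (assumed threshold)."""
    if mode == "none":
        return True  # baseline: no reference to cooperativeness
    if mode == "self":
        return cooperativeness_with(opponent_history, me) > 0.5
    if mode == "all":
        return cooperativeness_overall(opponent_history) > 0.5

# Opponent cooperated 1/3 of the time with "me" but 6/8 of the time overall.
history = {"me": [1, 0, 0], "x": [1, 1, 1], "y": [1, 1]}
```

The example shows why the two reference modes can diverge: the same opponent is judged a defector under "self" but a cooperator under "all".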
To flexibly coordinate a multi-agent system (MAS) composed of a large number of autonomous agents, we must allot the proper actions to the proper number of agents allocated to the proper areas. We have proposed a trial-and-error method for designing a coordinating system composed of many agents communicating indirectly through diffusing signals, by modifying the agents' behavior rule. In this paper, we improve a simple sorting task model (ant-like robot, ALR) by using this method. We introduce an attracted move toward the signal source and a random move during desensitization into the agent's behavior rule of ALR. We named this the signal-diffusion system. The signal-diffusion system operates according to a reaction-diffusion mechanism and shows higher performance on the sorting task than ALR.
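The reaction-diffusion substrate and the attracted move can be sketched as follows. This is a one-dimensional toy, not the paper's model; the diffusion rate, the decay rate, and the ring topology are assumptions for illustration.

```python
# Minimal sketch of a signal field for indirect agent communication.
# Agents deposit signal; the field diffuses and decays; agents climb the gradient.

def diffuse(field, rate=0.25, decay=0.05):
    """One reaction-diffusion step on a 1-D ring of signal concentrations."""
    n = len(field)
    new = []
    for i in range(n):
        lap = field[(i - 1) % n] + field[(i + 1) % n] - 2 * field[i]
        new.append((field[i] + rate * lap) * (1 - decay))
    return new

def attracted_move(field, pos):
    """Move toward the neighboring cell with the higher signal concentration."""
    n = len(field)
    left, right = field[(pos - 1) % n], field[(pos + 1) % n]
    return (pos - 1) % n if left > right else (pos + 1) % n

field = [0.0, 0.0, 1.0, 0.0, 0.0]  # a single signal source at cell 2
```

Desensitized agents would ignore `attracted_move` and step randomly instead, which is what keeps the system from locking into a single signal peak.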
In this paper, we propose a new bipartite modularity, a measure that quantitatively evaluates community structure by taking the correspondence between communities into account. The measure is validated by examining its value distribution over all patterns of community structure on a simple bipartite graph. Furthermore, we propose a method to extract community structure using the proposed measure, and demonstrate its applicability by applying it to several artificial bipartite graph models.
To build networks that remain robust under a wide range of external disturbances in the real world, it is very important to understand exactly the mechanism by which the giant component of a complex network collapses under various types of node removal. In most existing theoretical analyses, however, the structure of the networks under consideration is specified only by their degree distribution, and the degree-degree correlation between nodes, which is not necessarily small in real-world networks, has not been incorporated. In this article, we analytically study the mechanism of the collapse of the giant component in a complex network under various ways of node removal, incorporating the degree-degree correlation from the outset. Although the derived equations are valid for any type of node removal, we show the results for two specific cases: random node removal with a given probability, and selective node removal in which the highly connected nodes (hubs) are removed preferentially. We find that networks with assortative degree-degree correlation, where nodes of similar degree tend to connect to each other, become much more robust against both types of node removal than networks with the same degree distribution but without any degree-degree correlation. For scale-free networks, in particular, the robustness enhancement due to assortative degree-degree correlation is significant, and the inherent vulnerability of scale-free networks to selective node removal is considerably reduced by introducing assortative degree-degree correlations.
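The two removal schemes studied analytically above are easy to illustrate numerically. The following sketch (an illustration, not the paper's analysis) measures the giant component of a toy graph after random versus hub-targeted removal; a star graph makes the hub vulnerability extreme.

```python
# Giant-component size under random vs. selective (hub) node removal.
# Graphs are adjacency dicts: node -> list of neighbors.

def giant_component_size(adj):
    """Size of the largest connected component, by depth-first traversal."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def remove_nodes(adj, victims):
    """Subgraph with the victim nodes and their incident edges deleted."""
    victims = set(victims)
    return {u: [v for v in nb if v not in victims]
            for u, nb in adj.items() if u not in victims}

def hubs(adj, n):
    """The n highest-degree nodes, i.e. the targets of selective removal."""
    return sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:n]

# Star graph: one hub connected to six leaves.
star = {0: [1, 2, 3, 4, 5, 6], **{i: [0] for i in range(1, 7)}}
```

Removing a random leaf barely changes the giant component, while removing the single hub shatters it, which is the qualitative effect that assortative correlations mitigate.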
The network of bloggers interconnected by comment-exchange relationships obviously represents an aspect of human relationships. This paper reveals that interrelationships among items (such as products and works of art) can also be inferred from structural characteristics of this network, as follows. First, for each of the two items in question, the set of bloggers writing about that item is built. By "plotting" the members of each blogger set on the network described above, distributions of the blogger sets are obtained. Then, selecting an appropriate index for measuring the proximity of the distributions yields a correlation between that proximity and the relevance of the items.
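One plausible proximity index for two blogger sets plotted on a network is their mean pairwise shortest-path distance. This is only an illustrative choice; the paper selects its own index, which is not specified here.

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def set_proximity(adj, set_a, set_b):
    """Mean shortest-path distance between two node sets on the network.
    A smaller value means the two distributions are closer (one assumed index,
    not necessarily the one chosen in the paper)."""
    total, count = 0, 0
    for a in set_a:
        dist = bfs_dist(adj, a)
        for b in set_b:
            if b in dist:
                total += dist[b]
                count += 1
    return total / count if count else float("inf")

# A path of four bloggers: 0 - 1 - 2 - 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

On this toy network, bloggers writing about closely related items would cluster in nearby regions, giving a small index value.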
Many users are attracted to online social media such as Delicious and Digg, where they put tags on online resources. The relations among users, tags, and resources can be represented as a tripartite network composed of three types of vertices. Detecting communities (densely connected subnetworks) in such tripartite networks is important for finding similar users, tags, and resources. For unipartite networks, several attempts have been made to detect communities, and one of the popular approaches is to optimize modularity, a measure for evaluating the goodness of network divisions. Modularities for bipartite networks have been proposed by Barber, Guimera, Murata and Suzuki. However, as far as the author knows, there have been few attempts to define modularity for tripartite networks. This paper defines a new tripartite modularity that reflects the correspondence between communities of the three vertex types. By optimizing the value of our tripartite modularity, better community structures can be detected in synthetic tripartite networks.
This paper proposes a method for quantifying three information diffusion properties of the subnetwork that is reachable from an information source in an information diffusion network. This subnetwork is a directed acyclic graph composed of three types of directed two-edge connected subgraphs, which correspond to the basic phenomena of information diffusion: information scattering, information gathering, and information transmitting. We define the information scatter degree, the information gather degree, and the information transmit degree as information diffusion properties based on the numbers of the three types of directed two-edge connected subgraphs. Using these properties, we analyze the characteristics and the time-series variation of information diffusion in real information diffusion networks extracted from blogspace.
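One natural reading of the three two-edge patterns is: two edges leaving one node (scattering), two edges entering one node (gathering), and one edge in plus one edge out (transmitting). The counting sketch below follows that reading, which is an assumption about the paper's definitions.

```python
from math import comb

def diffusion_degrees(edges):
    """Counts of the three two-edge patterns centered on each node:
    scatter  = pairs of outgoing edges (one-to-many diffusion),
    gather   = pairs of incoming edges (many-to-one diffusion),
    transmit = incoming/outgoing edge pairs (pass-through diffusion).
    This pattern-based interpretation of the three degrees is an assumption."""
    indeg, outdeg, nodes = {}, {}, set()
    for u, v in edges:
        outdeg[u] = outdeg.get(u, 0) + 1
        indeg[v] = indeg.get(v, 0) + 1
        nodes.update((u, v))
    scatter = sum(comb(outdeg.get(n, 0), 2) for n in nodes)
    gather = sum(comb(indeg.get(n, 0), 2) for n in nodes)
    transmit = sum(indeg.get(n, 0) * outdeg.get(n, 0) for n in nodes)
    return scatter, gather, transmit

# A diamond DAG: 0 scatters to 1 and 2, which both transmit into 3 (a gather).
dag = [(0, 1), (0, 2), (1, 3), (2, 3)]
```

On the diamond example the three counts separate cleanly: one scattering pattern at the source, one gathering pattern at the sink, and two transmitting patterns in between.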
Recently, rewriting induction, one of the induction principles for proving inductive theorems of an equational theory, has been extended to deal with constrained term rewriting systems, and has been applied to developing a method for proving the equivalence of imperative programs. In proving inductive theorems, there are many cases where appropriate lemmas need to be added. To this end, several methods for lemma generation in term rewriting have been studied. However, these existing methods are not effective for constrained term rewriting. In this paper, we propose a framework of lemma generation for constrained term rewriting systems, in which we formalize the correspondences between terms in divergent equations by means of the given constrained rewrite rules. We also show an instance of the formalization, and show that with this instance the framework removes the need to supply lemmas in advance for the examples given in previous work.
In this paper, we propose formal verification for cooperating systems consisting of a CPU and a DRP. First, we specify the CPU as a real-time system and the DRP as a hybrid system using hybrid automata. Next, we verify schedulability using model checking. In order to avoid state explosion, we separate the schedulability verification into the following two verifications: 1. We verify whether the parallel composition of the CPU automaton and the worst-case DRP is schedulable. 2. We verify whether the parallel composition of the DRP automaton and the worst-case CPU is schedulable. We have realized schedulability verification of the CPU and the DRP by these two verifications.
Multi-stage programming is a programming style with multiple stages, such as a code generation stage and a code execution stage, and is a promising approach to combining reusability and efficiency. One of the important issues in multi-stage programming languages (MSLs, in short) is the safety of the generated code: it must be well-formed and have no free variables. Taha and Nielsen proposed a type system that guarantees these properties for an MSL, but their target language is a purely functional language without control effects such as exceptions and state. Kameyama, Kiselyov and Shan proposed an effect-and-type system for a language with the control operators shift and reset. This paper builds on their work: our target language has primitives for code execution and a multi-prompt extension of shift and reset. We design a type system for the language and prove its soundness.
Software is developed in accordance with a requirements specification, so omissions or errors in the requirements specification cause omissions or errors in subsequent deliverables. Requirements elicitation, the work of preparing the requirements specification, is therefore a very important process. However, it is very difficult to elicit customer requirements for software development without omissions or errors, mainly because the customer and the software engineer (SE) do not share common knowledge, and poor mutual communication then leads to omissions or errors in the elicitation work. In this paper we therefore propose a structure that guides requirements elicitation through interviews, enabling SEs to elicit customer requirements without omissions or errors by using interview skills. Furthermore, we conducted comparative experiments on requirements elicitation with and without this structure. As a result, we found that customer requirements were elicited without omissions or errors when the proposed structure was used, and we thereby verified its effectiveness.
Various distributed algorithms have been used to develop highly reliable and scalable Internet service platforms. Generally, to guarantee the legal behavior of such algorithms, a set of conditions must be satisfied, such as a bound on the maximum number of simultaneous node failures. In real systems, however, failures that violate these conditions may still occur and cause illegal behavior. To improve reliability against such failures, we present in this paper a passive replication method based on the self-stabilizing consensus algorithm originally introduced by Dolev et al. Our method is intended for developing Internet service platforms that tolerate the simultaneous failure of a large fraction of nodes, or failures that may cause inconsistency among the states of nodes in the system.
We are developing the SC language system, which facilitates language extensions by translation into C. SC languages are extended/plain C languages with an S-expression based syntax. This paper discusses the problems we found in this system while developing extended languages with it, and the improvements we made as solutions. In particular, we focus on additional features for extending existing translation phases (transformation rule-sets) between languages. These features employ a mechanism that dynamically determines the transformation function to apply to a single code fragment. We implemented this mechanism using CLOS and dynamic variables in Common Lisp. The proposed reuse mechanism enables us to implement extended languages by writing only the differences from existing translators, including the identity translators into the base languages. It also helps us reuse a commonly used rule-set as part of the entire translation. We have implemented various features as extensions to C, including multithreading, garbage collection, and load balancing. This paper discusses the multithreading case and shows the effectiveness of the proposed mechanism.
In model checking of concurrent software, a model must be sufficiently accurate. In this paper, we propose an intuitive representation of finite state machines that supports visual understanding of models through graph representation. The target modeling language is FSP, and the graph representation is based on LTS. The four visualization methods we propose suppress the state explosion of the LTS representation and emphasize characteristic state transition patterns that are common in concurrent systems. We built a tool that visualizes models using the proposed methods to support model understanding. In our experiments, we applied the methods to sixty-one examples and confirmed that they were visualized adequately.
This paper proposes a program camouflage method to protect software from reverse engineering. A user of the proposed method only has to construct a piece of fake source code by modifying the original source code. When an attacker statically analyzes a program protected by the method, the program looks like the fake code (with self-modifying code fragments). When the program is executed, however, the original code is executed. The proposed method is especially effective for hiding secret instructions and data from static analysis, and for preventing the extraction and reuse of secret parts of the program.
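The static-view/run-time-view split can be mimicked in a high-level toy, although the actual method targets native code and self-modification. In the Python stand-in below, only a fake routine and an opaque byte string are visible to a static reader; at run time a "restoring" step decodes and executes the original routine instead. All names and the XOR encoding are illustrative assumptions.

```python
# Toy stand-in for program camouflage: fake code visible statically,
# original code restored and executed at run time. Not the paper's method.

KEY = 0x5A  # decode key; a real scheme would hide this in the restoring code

def encode(src, key=KEY):
    """XOR-mask source text so it is opaque to a casual static reader."""
    return bytes(b ^ key for b in src.encode())

def fake_checksum(x):
    """What a static reader sees: a plausible but wrong computation."""
    return x + 1

# The original routine, stored only in encoded form.
_ORIGINAL = encode("def real_checksum(x):\n    return x * 31 % 97\n")

def run_protected(x):
    """At run time, restore the original routine and execute it, not the fake."""
    ns = {}
    exec(bytes(b ^ KEY for b in _ORIGINAL).decode(), ns)
    return ns["real_checksum"](x)
```

The point of the toy is only the asymmetry: static inspection of the module shows `fake_checksum`, while execution yields the behavior of the hidden routine.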
Hybrid systems are dynamical systems with continuous evolution of states and discrete evolution of states and governing equations. We have worked on the design and implementation of HydLa, a constraint-based modeling language for hybrid systems, with a view to the proper handling of uncertainties and the integration of simulation and verification. HydLa's constraint hierarchies facilitate the description of constraints with adequate strength, but its semantic foundations are not obvious due to the interaction of various language constructs. This paper gives the declarative semantics of HydLa and discusses its properties and consequences by means of examples.