The influence maximization problem is the problem of finding the node set of size k that maximizes the spread of information or disease in a given network. Solving this problem is one of the most important steps in predicting epidemic size or optimizing viral marketing. The problem has been proved to be NP-hard, and many approximation algorithms have been proposed. Most of these algorithms target static networks, and few can be applied to temporal networks, whose structures change dynamically as time elapses. In this paper, we propose a new algorithm for the influence maximization problem in temporal networks. The proposed algorithm is a greedy algorithm that starts with an empty node set S and adds the node n that maximizes the influence of S ∪ {n} until |S| = k. We approximate the influence of a node set S with a heuristic approach, because calculating the influence of a node set exactly is #P-hard. Experiments comparing the exact solution with the solution obtained by our method show that the proposed method outputs nearly exact solutions on small networks for which the exact solution can be obtained in practical time. In further experiments on larger networks, we demonstrate that the proposed method is more effective than conventional methods that select nodes in order of centrality values, and is consistently 640 times faster than the greedy method using Monte-Carlo simulation, without losing much accuracy.
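The greedy skeleton described above can be sketched as follows. Since the paper's heuristic influence estimator is not specified here, this illustrative version estimates influence by Monte-Carlo simulation under the independent cascade model (the baseline the paper compares against) on a static toy graph; the graph, propagation probability, and run count are all assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade run; returns the number of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def estimate_influence(graph, seeds, runs=200):
    """Monte-Carlo estimate of the expected spread of a seed set."""
    return sum(simulate_ic(graph, seeds) for _ in range(runs)) / runs

def greedy_seed_set(graph, k):
    """Greedy: start from an empty set S and repeatedly add the node n
    that maximizes the estimated influence of S ∪ {n}, until |S| = k."""
    seeds = set()
    for _ in range(k):
        best, best_val = None, -1.0
        for n in graph:
            if n in seeds:
                continue
            val = estimate_influence(graph, seeds | {n})
            if val > best_val:
                best, best_val = n, val
        seeds.add(best)
    return seeds

# Toy static snapshot; a temporal network would use time-stamped edges instead.
toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0], 4: [0, 1]}
print(greedy_seed_set(toy, 2))
```

The heuristic approach in the paper replaces the inner Monte-Carlo estimate, which is exactly the part this sketch spends almost all of its time in.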
In this paper, we propose novel centrality measures that extract important nodes from weighted networks such as road networks, where an actual distance is assigned to each link. Since the distances between nodes are not taken into consideration in traditional centrality measures such as closeness and betweenness, their applicability to real-world problems on road networks with distances is limited. Aiming at extracting important sightseeing spots so as to improve the convenience of tourists, we propose two measures that consider actual distances: ``detour centrality'', which measures the ease of making a brief detour, and ``convenience centrality'', which measures accessibility; they are based on traditional closeness and betweenness centrality, respectively. Furthermore, when extracting two or more nodes, simply selecting the nodes with the highest values of a centrality measure tends to yield nodes located near one another, since the overall balance is not taken into consideration. To overcome this shortcoming, we extend the above centrality measures to ``set detour centrality'' and ``set convenience centrality'' and attempt to maximize the utility of all tourists over the target area by extracting the set of nodes that maximizes the values of these set centrality measures. In experiments using two real sightseeing-spot datasets, we show that our extended measures can extract an appropriate set of spots in terms of ease of detour and accessibility, and that these measures are robust to changes in distances and to the emergence of outlier spots.
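As an ingredient of such distance-aware measures, a closeness-style centrality computed over actual link distances can be sketched as follows. The exact definitions of detour and convenience centrality are not reproduced; the toy road network and its distances are illustrative assumptions.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def distance_closeness(adj, node):
    """Closeness using actual link distances: (n - 1) / (sum of distances),
    rather than hop counts as in the unweighted measure."""
    dist = dijkstra(adj, node)
    others = [d for v, d in dist.items() if v != node]
    return (len(others) / sum(others)) if others else 0.0

# Toy road network: link distances in km (illustrative values only).
road = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0, "D": 5.0},
    "C": {"A": 4.0, "B": 2.0, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}
print(distance_closeness(road, "B"))  # → 0.5 (distances 1, 2, 3 from B)
```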
In the field of marketing science, modeling purchase behavior and analyzing brand choice are important research tasks. This paper presents a method that enables such analysis through time-series pattern extraction based on Non-negative Tensor Factorization (NTF). The development of scanning devices and electronic payments (e.g., online shopping, mobile-phone wallets and electronic money) has led to the accumulation of more detailed POS data, including information about the purchase shop, amount of payment, time, location and so on, which opens possibilities for a deeper understanding of purchasing behavior. On the other hand, due to the increase in the number of attributes, it is still difficult to handle such large feature sets effectively and efficiently. In this paper, we represent the features as a high-order tensor. Then, using NTF to decompose multiple attributes simultaneously, we show the analytic effectiveness of pattern factorization on real beer item/brand purchase data. By applying NTF along three axes, USER-ID × TIME-STAMP × ITEM-ID, we find several temporal tendencies depending on the season. In addition, by focusing on the purchase-pattern correlations between beer items and brands, we find that tendencies in brand choice strategies appear in the graph drawing results.
This study aims to investigate the structural heterogeneity of real-life supply networks in the automotive industry through complex network analysis of large-scale empirical data. The concept of complex adaptive systems, which considers supply network structures as the result of emergence, has recently been well accepted, and a considerable number of models have been proposed. However, such models fail to capture how supply network structures may reflect the effects of various factors: product characteristics differ in the degree of standardization, modularization and technological advancement, suppliers have different production capabilities, and different car assemblers may have different production strategies. The unique data we collected provide information about ``which suppliers supply which auto parts to which car manufacturers", which allowed us first to confirm the scale-free property of the distribution of suppliers' production capabilities, and to highlight a wide diversity of product characteristics. Furthermore, the bipartite network constructed from the collected data was projected onto another information space, resulting in a product network that exhibits proximity between products. Analysis of this product network elucidated a high degree of structural heterogeneity in the auto-parts supply network, in which various factors may be reflected. The results of this study will contribute to the establishment of more realistic supply network models.
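The bipartite projection step can be sketched as follows, assuming a simple co-supplier proximity in which two products are linked by the number of suppliers supplying both; the paper's actual proximity measure may differ, and the data below are toy values.

```python
from itertools import combinations
from collections import defaultdict

def project_products(supplies):
    """Project a supplier-product bipartite network onto products.

    Edge weight between two products = number of suppliers that supply
    both (one simple proximity; other weightings are possible).
    """
    weight = defaultdict(int)
    for supplier, products in supplies.items():
        for a, b in combinations(sorted(products), 2):
            weight[(a, b)] += 1
    return dict(weight)

# Toy "which supplier supplies which auto parts" data (illustrative).
supplies = {
    "s1": {"brake", "clutch"},
    "s2": {"brake", "clutch", "wiper"},
    "s3": {"wiper"},
}
print(project_products(supplies))
# → {('brake', 'clutch'): 2, ('brake', 'wiper'): 1, ('clutch', 'wiper'): 1}
```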
The game of Hex is a board game with simple rules, classified as a two-player, zero-sum, perfect-information game. The game proceeds with the players placing their pieces in turn on empty cells of the board. A player wins by connecting the two opposing sides of the board with pieces of their own color. Our previous study clarified that it is effective to develop computer Hex strategies using network characteristics as an evaluation function that assesses board states from global and local perspectives, and showed that there is a best parameter for the ratio between global and local evaluation during a match. To go beyond that strategy, we hypothesize that the ratio must change during a match depending on the board state, just as human players evaluate board states differently in the opening, middle and final stages. First, we examine the hypothesis that a better winning rate can be achieved by changing the ratio of global to local evaluation, and we propose a novel computer Hex program that evaluates board states while changing the balance of global and local evaluation by recognizing board states with an SVM. Our proposed method is evaluated against the current world-champion program, MoHex.
Distribution centers have become more important because the growth of the Internet has enabled us to buy various products easily online. In a distribution center, one of the main tasks is order picking, which is to retrieve products from the warehouse shelves according to the orders received from customers. Efficient order picking saves a large amount of work, and improving order picking is essential in larger distribution centers. For order picking, it is important not only to give workers a good tour plan but also to assign products appropriately to the storage shelves in the distribution center. It is known to be effective to consider order frequency and the frequency of co-occurrence at the same time for storage location assignment. Conventional studies propose a class-based method for order frequency and an evaluation function that assesses order frequency and order co-occurrence at the same time. This study proposes a novel method for assigning products to storage locations using a co-occurrence network of ordered products and a self-organizing map on the network topology of the warehouse. Our proposed method is compared with the conventional ones, and our experiments show that it can be more effective than the conventional methods.
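Building the co-occurrence network of ordered products, the input to such an assignment method, can be sketched as follows; the SOM-based assignment itself is not reproduced, and the item names and orders are toy assumptions.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(orders):
    """Build a product co-occurrence network from picking orders.

    Edge weight = number of orders containing both products;
    node weight = order frequency of the product.
    """
    freq = Counter()
    edges = Counter()
    for order in orders:
        items = sorted(set(order))   # deduplicate, canonical pair order
        freq.update(items)
        edges.update(combinations(items, 2))
    return freq, edges

# Toy orders (illustrative item names).
orders = [["tea", "sugar"], ["tea", "milk", "sugar"], ["milk"], ["tea"]]
freq, edges = cooccurrence_network(orders)
print(freq["tea"], edges[("sugar", "tea")])  # → 3 2
```

Frequently co-ordered products end up with heavy edges, so placing strongly connected nodes on nearby shelves shortens picking tours.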
In this paper, we propose a new Non-negative Matrix Factorization (NMF) method for consumer behavior pattern extraction. NMF is a pattern extraction method formulated to factorize a non-negative matrix into the product of two factor matrices. Since various types of datasets are represented by non-negative matrices, NMF can be applied in a wide range of research fields, including marketing science, natural language processing and brain signal processing. However, a more effective extension is required for purchase log analysis in marketing operations, since marketers need to extract interpretable patterns from sparse matrices in which most of the elements are zero. Therefore, we propose Non-negative Micro Macro Mixed Matrix Factorization (NM4F), which uses attribute information of both users and items to improve interpretability and the capability to deal with sparsity. NM4F is formulated as a method that simultaneously factorizes multiple matrices using shared factor matrices and linear constraints between factor matrices. This formulation increases the amount of available information and extracts consistent patterns across several different aspects. We derive the parameter estimation algorithm based on multiplicative update rules. We confirmed the effectiveness of the proposed method both qualitatively and quantitatively using a real consumer panel dataset. In addition, we discuss the relations between the extracted patterns using visualization results based on graph drawing.
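For reference, plain NMF with the standard multiplicative update rules, which NM4F extends, can be sketched as follows. This is the classical Lee-Seung scheme, not the NM4F formulation, and the toy purchase matrix is an assumption.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=200, eps=1e-9, rng=random.Random(0)):
    """Factorize non-negative V (n x m) into W (n x r) and H (r x m)
    with multiplicative updates:  H <- H * (W^T V) / (W^T W H),
    W <- W * (V H^T) / (W H H^T).  All factors stay non-negative."""
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        WH = matmul(W, H)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(r)]
        WH = matmul(W, H)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H

# Toy user-by-item purchase count matrix (illustrative values).
V = [[5, 3, 0, 1], [4, 0, 0, 1], [1, 1, 0, 5], [1, 0, 0, 4], [0, 1, 5, 4]]
W, H = nmf(V, 2)
```

NM4F's additions, shared factor matrices across several matrices and linear constraints between them, change the update rules but keep this multiplicative structure.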
I have studied heterogeneous systems comprised of a large number of agents divided into a small number of groups, and devised a method for designing such systems. In a previous paper, I devised the simplest heterogeneous boid model, comprised of two types of boids, as an example of such a system. When simulated with varying interactions within and between the boid groups, this simplest model forms several typical patterns. Among these patterns, I focus on segmented patterns with a clear interface between the two boid groups, which provide position information. These patterns are fragile when additional actions are assigned to the boid groups. In this paper, I therefore improve the simplest model to enhance the interface, using the design method. I modify the simplest model by adding the following three factors in sequence: 1) a third type of boid, which strongly unifies the existing two types; 2) a rule to transmute boid types, which binds the third-type boids to the interface between the two boid groups; and 3) ratio control among the boid groups. The improved model with the first factor is named the fixed model, and the one with the first and second factors is named the transition model. The fixed model generates stable segmented patterns with an enhanced interface, in which the third-type boids spread widely around the interface. The transition model generates stable segmented patterns with an enhanced interface when simulated with the third factor.
The structure of control software is often complicated, because it is created by multiple developers and functions are added later. Thus, many software developers want technology for restructuring such software by decomposing its structure. At present, however, the grouping of control software depends on the experience and intuition of skilled engineers. In addition, opportunities for review are very limited, and no generic clustering algorithm has yet been established; existing clustering algorithms were not designed to be applied to control software. The problems are that inter-group feedback increases and that the group size is not adjustable. When inter-group feedback is large, the control software is difficult to understand, and rework increases when the work is divided. Also, when there is feedback between a preceding process and the next process in black-box testing, the calculation result of the next step also affects the preceding process, which considerably increases the testing effort. In this study, we apply graph-theoretic clustering algorithms to organize the structure of control software. Using a genetic algorithm and forming groups based on modularity, we aid the formation of groups of closely related elements with little inter-group feedback. Furthermore, for ease of understanding and ease of testing, we also aim to adjust the number of groups to a size suitable for the control software.
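The modularity objective that such a genetic algorithm optimizes can be sketched as standard Newman modularity; the GA itself and the inter-group feedback terms are not reproduced, and the edge list and partition below are toy assumptions.

```python
def modularity(edges, partition):
    """Newman modularity Q of a node partition of an undirected graph.

    edges: list of (u, v) pairs; partition: dict node -> group id.
    Q = sum over groups c of [ (intra-edges of c)/m - (deg sum of c / 2m)^2 ].
    """
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Fraction of edges that fall inside a group.
    q = sum(1.0 / m for u, v in edges if partition[u] == partition[v])
    # Subtract the expected intra-group fraction for a random graph
    # with the same degree sequence.
    group_deg = {}
    for node, g in partition.items():
        group_deg[g] = group_deg.get(g, 0) + deg[node]
    for total in group_deg.values():
        q -= (total / (2.0 * m)) ** 2
    return q

# Two clearly separated triangles joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(modularity(edges, part))  # → 6/7 - 0.5 ≈ 0.357
```

A GA would treat `part` as the genome and use `modularity` (plus penalty terms for inter-group feedback and group size) as the fitness function.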
The Web is perhaps the most complex system that we know today. Its massive scale, complex dynamism, open richness, and social character mean that it may be more profitable to study it using tools and concepts appropriate for understanding nervous systems, organisms, ecosystems and society, rather than approaches more traditionally employed to study engineering technology. Simultaneously, the scientists trying to understand this wide array of complex natural systems may have much to gain by considering the emerging study of the Web. In this paper, taking examples from our recent studies on the Web, we concretely discuss the relevance of the Web as a large model, as opposed to the small models often used in physics or biology, for understanding living systems. We put forward the idea of a default mode network that introduces autonomy, evolvability and homeostasis into the Web. For example, we argue for the existence of two modes of states in Twitter: excitation and baseline. The Web turns out to be an excitable medium similar to a brain or certain kinds of chemical systems. W. R. Ashby's law of requisite variety is also revisited to study its relevance in the light of controlling complex systems.
Modern people are concerned with healthy eating habits; however, sustaining these habits often requires vigilant self-monitoring and a strong will. The satisfaction found in a meal is influenced not only by the food itself, but also by external stimuli and information. This effect is called expectation assimilation in behavioral science. We propose a social media system that enables people to begin eating healthier meals naturally and without conscious effort. The system uses others' positive evaluations as a trigger of expectation assimilation. Using the proposed system, users share information on their meals and evaluate the yumminess and healthfulness of each other's meals. The novelty of the system is that it modifies others' evaluations, displaying evaluations of healthfulness as evaluations of yumminess to the user consuming the meal. Therefore, users tend to eat more foods that others evaluate as healthful and thereby improve their eating habits without noticing it. In this paper, we report on the mechanism of the proposed system and the results of a user study under controlled circumstances. Moreover, we integrated our method with a published mobile application that already had many users. We examined our proposal in a real-world context with this application and, consequently, demonstrated the practical effectiveness of the method.