Recently, significant gains have been made in our understanding of multi-robot systems, and such systems have been deployed in domains as diverse as precision agriculture, flexible manufacturing, environmental monitoring, search-and-rescue operations, and even swarming robotic toys. These developments have been enabled by a combination of technological advances in the performance, price, and scale of the platforms themselves and a new understanding of how the robots should be organized algorithmically. In this paper, we focus on the latter of these advances, with particular emphasis on decentralized control and coordination strategies for multi-robot systems. The paper discusses a class of problems concerning the assembly of desired geometric shapes in a decentralized manner, formulated as descent-based algorithms defined with respect to team-level performance costs.
In this paper, we propose a method to project the households of a synthetic population onto buildings using fundamental geospatial data for real-world social simulations. That is, we assign each generated household to a building on a geographical map. To conduct a real-scale social simulation, we need both the attributes of agents and their locations on a geographical map. We have already proposed a synthetic population method that generates the attributes of agents, or citizens, from real-world statistics. To determine the locations of agents, this paper proposes a threefold method that projects the generated households onto buildings in a geographical map using fundamental geospatial data. We apply the proposed method to households generated from the statistics of Takatsuki City, Osaka, Japan, and project them onto buildings on the map. To cope with the problem of randomly assigning households to buildings, we propose a modified method that considers the types and areas of buildings. The projection results show that households are assigned more reasonably to detached houses and apartments.
In this paper, we modify the synthetic reconstruction (SR) method so that it requires no individual samples. Synthetic reconstruction generates population attributes such as age, sex, and kinship within a family according to available statistics. Because the original SR method relies on the individual samples from which the statistics were compiled, it has been criticized on the grounds that the generated attributes are confined to those appearing in the samples. In this paper, we employ a simulated-annealing-based SR method that works without samples. We compare two ways of generating a candidate solution in the simulated annealing search: changing the age of one agent (age-change) or swapping the ages of two agents (age-swap). The reconstruction results show that age-change performs better when the number of search iterations is limited, whereas age-swap performs better when enough iterations are available for reconstructing a population.
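The two candidate-generation rules compared above can be sketched as follows; this is a minimal stdlib-only illustration in which the cost function, cooling schedule, and parameter values are our own toy choices, not the paper's:

```python
import math
import random

def age_change(ages, max_age=100, rng=random):
    """Candidate generation: re-draw the age of one randomly chosen agent."""
    cand = ages[:]
    cand[rng.randrange(len(cand))] = rng.randrange(max_age + 1)
    return cand

def age_swap(ages, rng=random):
    """Candidate generation: swap the ages of two randomly chosen agents.
    Note: this preserves the overall age distribution, so it only helps when
    the cost also depends on where an age sits (e.g., kinship constraints)."""
    cand = ages[:]
    i, j = rng.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]
    return cand

def anneal(ages, cost, neighbor, steps=5000, t0=1.0, alpha=0.999, seed=0):
    """Plain simulated annealing over an age vector; returns the best solution found."""
    rng = random.Random(seed)
    cur, cur_cost = ages[:], cost(ages)
    best, best_cost, t = cur[:], cur_cost, t0
    for _ in range(steps):
        cand = neighbor(cur, rng=rng)
        cand_cost = cost(cand)
        # Accept improvements always, worsening moves with Boltzmann probability.
        if cand_cost <= cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        t *= alpha
    return best, best_cost
```

In practice the cost would measure the discrepancy between the synthetic population and the target statistics; here any position-dependent cost (standing in for household-level constraints) suffices to exercise both moves.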
In this research, a novel teaching simulation model is proposed in which the understanding status, knowledge structure, and collaborative effect of each learner are integrated through the concept of a doubly structural network model. The purpose of this research is to investigate how learners' understanding changes under various teaching styles in a classroom. The proposed model consists of students' mental models, their learning capabilities, in-class learning processes, learning material structures, and their collaborative relationships. In the simulation experiments, we analyse the effects of both different teaching strategies and the seating arrangement of learners on collaborative learning. From the simulation analyses, we have found that: (1) the learning effects depend on the teaching strategy, (2) a teaching strategy that integrates learning skills, material structure, and collaborative learning is the most effective, (3) the seating arrangement affects collaborative learning, and (4) ability-grouped classes have adverse effects on learners in collaborative learning.
With public services provision leveraging Web 2.0 technologies, citizens are no longer merely passive recipients of public services; rather, they are encouraged to actively participate in and contribute to public service co-production. However, since the contribution rate remains relatively low, how to attract heterogeneous citizens to actively engage in such public services is a critical issue. Adopting a service-dominant logic perspective, this paper examines public service co-production as a holistic service system by deploying an agent-based simulation approach. More specifically, we employ a public goods game to examine how to promote heterogeneous citizens' collective engagement in public service co-production, along with the resulting enrichment of public services. In addition, we integrate a learning mechanism to examine the influence of both online and offline environments on contributions to public services. A set of scenario analyses is conducted at both the aggregate level and the meso level to support policy evaluation. Simulation results suggest that frequent government support and interaction with citizens may encourage more citizens to contribute to service co-production, and that smaller communities with more capable citizens have higher contribution rates. In contrast, community-based learning may not be an effective strategy to promote service co-production compared with government support.
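The public goods game at the core of such a model can be sketched as follows; a minimal stdlib-only illustration in which the multiplication factor, endowment, and imitation rule are our own illustrative choices rather than the paper's calibrated settings:

```python
import random

def payoffs(contribs, r=1.6, endowment=1.0):
    """Linear public goods game: all contributions are multiplied by r and the
    pot is shared equally among the n citizens, whether they contributed or not."""
    pot = r * sum(contribs)
    n = len(contribs)
    return [endowment - c + pot / n for c in contribs]

def imitation_step(contribs, r=1.6, rng=random):
    """Toy learning rule (our assumption): each citizen looks at one random
    other citizen and copies that citizen's strategy if it earned strictly more."""
    pay = payoffs(contribs, r)
    nxt = contribs[:]
    for i in range(len(contribs)):
        j = rng.randrange(len(contribs))
        if pay[j] > pay[i]:
            nxt[i] = contribs[j]
    return nxt
```

The payoff structure exhibits the free-rider problem the abstract addresses: because the pot is shared equally, a non-contributor always earns more than a contributor in the same round, which is why external incentives such as government support matter.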
This paper presents a real-time worker behavior analysis system that uses wearable sensors combining a Bluetooth Low Energy (BLE) beacon and an acceleration sensor to measure production progress and work history data in a cellular manufacturing system. Collecting such data is costly on a cellular manufacturing line, where most of the work is performed by workers. To address this, we first built an experimental cellular manufacturing line and collected workers' behavioral data. Next, we developed our system and determined its analysis parameters using these behavioral data. Finally, we built another experimental cellular manufacturing line and measured production progress and work history data with our system. We then compared the results with those of a conventional visual method using video. The results revealed that our system can measure productivity data on a cellular manufacturing line that does not use machines, and that production progress and work history data can be gathered more quickly than with the conventional method. We believe that our system will make it possible to increase the efficiency of the supply chain, obtain quick feedback in daily production, and improve production.
Because of their architecture, central processing units (CPUs) and graphics processing units (GPUs) are inefficient at information-detection tasks such as search, reference, and recognition. The memorism processor is a memory-based processor that complements this weak point of the CPU. Two memorism processors, the set operating processor (SOP) and the database processor (DBP), are device technologies that cover the processing at which CPUs and GPUs are not good. Their advantages in various information processing tasks, including speed and energy efficiency, have been demonstrated, and there are therefore great expectations for them as a device technology in the post-Moore era. In addition to the conventional SOPs and DBPs, we have developed the cross operating processor (XOP), which excels at combination/comparison operations and for which we have applied for a patent. These three memorism processors are expected to play a great role in the evolution of artificial intelligence. This paper is a continuation of [K. Inoue, M. Odaka and C.-K. Pham: Memorism Processor which complements weak point of von Neumann processor, Proc. SII2016, pp. 267-270, 2016], and the authors propose computation that is more suitable for the artificial intelligence era.
Radial basis function (RBF) networks are used in various research fields. In particular, their mathematical form makes them easy to handle for classification and function approximation. Several parameters of an RBF network must be carefully selected to obtain good performance on a specific problem. This study investigates the radius of an RBF network in a simple sequential approximate optimization setting. The results show that there is an effective radius range for sequential approximate optimization methods.
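The role of the radius can be illustrated with a tiny Gaussian-RBF surrogate; this is a stdlib-only sketch in which the kernel form, test function, and radius values are our illustrative choices, not the study's actual setup:

```python
import math

def gauss_kernel(x, c, radius):
    """Gaussian basis function centered at c with the given radius."""
    return math.exp(-((x - c) / radius) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (stdlib-only linear solver)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, radius):
    """Interpolating RBF surrogate: solve the kernel system for the weights."""
    A = [[gauss_kernel(x, c, radius) for c in xs] for x in xs]
    return solve(A, ys)

def rbf_eval(xs, w, radius, x):
    """Evaluate the fitted surrogate at x."""
    return sum(wi * gauss_kernel(x, c, radius) for wi, c in zip(w, xs))
```

With a reasonable radius the surrogate interpolates the samples and varies smoothly between them; with a radius that is far too small, the basis functions barely overlap and the surrogate collapses toward zero between samples, which is why an effective radius range matters for sequential approximate optimization.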
Synchronous and asynchronous algorithms are presented for distributed minimax optimization. The objective is to minimize the maximum of component functions over a standard multi-agent network, where each node knows its own function and exchanges its decision variable with its neighbors. The proposed algorithms are standard consensus- and gossip-based subgradient methods, while the original minimax problem is recast as minimizing the sum of component functions by means of a p-norm approximation. A scalable step size depending on the approximation ratio p is also presented in order to avoid slow convergence. Numerical examples illustrate that the algorithms with this step size work well even for high approximation ratios.
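The p-norm approximation of the maximum can be sketched as follows, assuming nonnegative component values (the sample values and choices of p below are purely illustrative):

```python
def pnorm_smooth_max(values, p):
    """p-norm surrogate for the max of nonnegative values: for m components,
    max(v) <= (sum v_i**p)**(1/p) <= m**(1/p) * max(v),
    so the surrogate tightens toward the true max as p grows."""
    return sum(v ** p for v in values) ** (1.0 / p)

def pnorm_weights(values, p):
    """Weights (v_i / ||v||_p)**(p-1) that combine the component (sub)gradients
    when differentiating the surrogate; they concentrate on the maximizer as p grows."""
    s = pnorm_smooth_max(values, p)
    return [(v / s) ** (p - 1) for v in values]
```

The trade-off motivating the scalable step size is visible here: a larger p approximates the max more tightly, but the weights become increasingly peaked, which flattens the surrogate away from the maximizer and slows plain subgradient descent.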
In this paper, we analyze vulnerabilities to integrity cyber attacks, called zero-stealthy attacks, in cyber-physical systems modeled as a stochastic linear time-invariant (LTI) system equipped with a Kalman filter, an LQG controller, and a χ2 failure detector. The attacks are designed by a sophisticated attacker so that the measurement residual of the compromised system coincides with that of the healthy one, making the attacks impossible to detect. First, we characterize and analyze an existence condition for the attacks from the attacker's standpoint. Then, we extend the attacks toward an attacker's goal: the scenario in which the adversary wishes to drive the system to an objective of his/her own design. Our results show that the attacker can steer the compromised system to this objective without accessing the networks of real-time sensor or actuator data. Finally, we demonstrate the danger of the attacks through a simple numerical example.
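The stealth mechanism can be illustrated with a minimal scalar sketch: the attacker injects an actuator signal and simultaneously cancels its effect in the sensor channel, so the innovation residual (the quantity a χ2 detector monitors) is unchanged while the true state drifts. All gains and noise levels below are our own illustrative choices, not taken from the paper:

```python
import random

def simulate(steps=50, a=1.1, c=1.0, kgain=0.7, attack=False, seed=0):
    """Scalar stochastic LTI plant with a steady-state Kalman filter.
    Under attack, an actuator injection ua drives the state away, while the
    sensor channel is biased by -c*delta so the residual coincides with the
    healthy system's residual and a residual-based chi2 detector stays silent."""
    rng = random.Random(seed)
    x, xhat, delta = 0.0, 0.0, 0.0   # true state, estimate, attack-induced deviation
    residuals, states = [], []
    for _ in range(steps):
        w, v = rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)
        ua = 0.2 if attack else 0.0      # actuator attack signal
        x = a * x + ua + w               # plant (no nominal input, for simplicity)
        delta = a * delta + ua           # deviation the attack has caused so far
        y = c * x + v - c * delta        # sensor attack hides the deviation
        r = y - c * a * xhat             # innovation residual monitored by the detector
        xhat = a * xhat + kgain * r      # steady-state Kalman filter update
        residuals.append(r)
        states.append(x)
    return residuals, states
```

Running the healthy and attacked systems with the same noise realization shows that the residual sequences coincide (so any detector driven only by residuals is blind to the attack) even though the true states diverge.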
The electrical properties (EPs) of biological tissue, namely conductivity and permittivity, provide useful information for the diagnosis of malignant tissues and the evaluation of heat absorption rates. Recently, magnetic resonance electrical properties tomography (MREPT), in which EPs are reconstructed from internal magnetic field data measured by magnetic resonance imaging (MRI), has been actively studied. We previously proposed an explicit pointwise reconstruction method for MREPT based on a complex partial differential equation (PDE) for the electric field, known as the D-bar equation, and its explicit solution given by an integral formula. In this method, as in some other conventional methods, the EP values on the boundary of the region of interest must be given as a Dirichlet boundary condition of the PDE. However, it is difficult to know these values precisely in practice. Therefore, in this paper, we propose a novel method for reconstructing EPs in a circular region without any knowledge of the boundary EP values. Starting from the integral solution of the D-bar equation in a circular region with a Neumann boundary condition, we show that the contour integral term of the integral formula is eliminated by using Faraday's law, and we solve the PDE based only on magnetic field data measured by MRI. Numerical simulations show that the proposed method yields good reconstruction results without any knowledge of the boundary EP values.