Localization is one of the key techniques in wireless sensor networks. While the global positioning system (GPS) is one of the most popular positioning technologies, its high cost and energy consumption make it impractical to install a GPS module in every node. To reduce cost and energy consumption, only a few nodes, called beacon nodes, are equipped with GPS modules; the remaining nodes obtain their locations through localization. To minimize the number of beacon positions, a resolving set of minimum cardinality, called a metric basis, is sought in the network. A simultaneous local metric basis of the network is also given, in which each pair of adjacent vertices of the network is distinguished by some element of the basis, which makes the network design more reasonable. In this paper a new network, the generalized Möbius ladder Mm,n, is introduced, and its metric dimension and the simultaneous local metric dimension of two of its subfamilies are calculated.
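The metric basis idea above can be made concrete with a small brute-force sketch: the smallest vertex set whose distance vectors distinguish all vertices. The 4-cycle below is only a toy graph, not the generalized Möbius ladder of the paper.

```python
from collections import deque
from itertools import combinations

def shortest_paths(adj):
    """All-pairs shortest-path lengths via BFS on an adjacency dict."""
    dist = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[s] = d
    return dist

def is_resolving(vertices, subset, dist):
    """W resolves G if every vertex has a unique vector of distances to W."""
    codes = {tuple(dist[w][v] for w in subset) for v in vertices}
    return len(codes) == len(vertices)

def metric_dimension(adj):
    """Cardinality of a metric basis, found by exhaustive search."""
    dist, vs = shortest_paths(adj), list(adj)
    for k in range(1, len(vs) + 1):
        for subset in combinations(vs, k):
            if is_resolving(vs, subset, dist):
                return k, subset

c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # the cycle C4
k, basis = metric_dimension(c4)
print(k, basis)  # 2 (0, 1)
```

No single vertex resolves a cycle (vertices equidistant from it collide), so the search stops at a pair, matching the known metric dimension 2 for cycles.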
This paper proposes observer-based piecewise multi-linear controllers for nonlinear systems using feedback and observer linearizations. The piecewise model is a fully parametric nonlinear approximation. Feedback linearization is applied to stabilize the piecewise multi-linear control system. Observer linearization, however, is more conservative with respect to modeling errors than feedback linearization. In this paper, we propose robust observer designs for piecewise multi-linear systems. Moreover, we design piecewise multi-linear controllers that combine the robust observer with various performance objectives, such as regulation and tracking. These design methods realize a separation principle that allows the observer and the regulator to be designed separately. Examples are demonstrated through computer simulations to confirm the feasibility of our proposals.
The maturing of autonomous technology has fostered a rapid expansion in the use of Autonomous Underwater Vehicles (AUVs). To prevent the loss of AUVs during deployments, existing risk analysis approaches tend to focus on technicalities, historical data, and expert opinion for probability quantification. However, data may not always be available, and the complex interrelationships between risk factors are often neglected due to uncertainties. To overcome these shortfalls, a hybrid fuzzy system dynamics risk analysis (FuSDRA) is proposed. The approach combines the strengths of system dynamics and fuzzy set theory while overcoming their individual limitations. Presented as a three-step iterative framework, the approach was applied to a case study examining the impact of crew operating experience on the risk of AUV loss. Results showed not only that the initial experience of the team affects the risk of loss, but also that a loss of experience in the earlier stages of an AUV program has a lesser impact than in later stages. A series of risk control policies is recommended based on the results. The case study demonstrates how FuSDRA can be applied to inform human resource and risk management strategies, and suggests broader applications within the AUV domain and other complex technological systems.
MMMs-induced fuzzy co-clustering achieves a dual partition of objects and items by estimating two different types of fuzzy memberships. Because the memberships of objects and items are usually estimated under different constraints, conventional models mainly targeted object clusters only, while item memberships were designed to represent intra-cluster typicalities of items, estimated independently in each cluster. To improve the interpretability of co-clusters, meaningful items should not belong to multiple clusters, so that each co-cluster is characterized by different representative items. In previous studies, the item-sharing penalty approach has been applied to the MMMs-induced model, but the dual exclusive constraints approach has not yet been. In this paper, a heuristic-based approach in FCM-type co-clustering is modified for adoption in MMMs-induced fuzzy co-clustering, and its characteristics are demonstrated through several comparative experiments.
Imbalanced datasets are a crucial problem in many real-world applications. Classifiers trained on such datasets tend to overfit toward the majority class, which severely affects classifier accuracy. This ultimately incurs a large cost to cover the error of misclassifying the minority class, especially in credit-granting decisions where the minority class consists of bad loan applications. By comparing the industry standard with well-known machine learning and ensemble models under imbalance treatment approaches, this study shows the potential performance of these models relative to the industry standard in credit scoring. More importantly, diverse performance measurements reveal different weaknesses in various aspects of a scoring model. Employing class balancing strategies can mitigate classifier errors, and both homogeneous and heterogeneous ensemble approaches yield significant improvements in credit scoring.
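One of the simplest class-balancing strategies of the kind discussed above is random oversampling of the minority class; a minimal sketch follows (the dataset and labels are made up, and the paper's actual treatment approaches may differ).

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate randomly chosen minority-class samples until both
    classes are equal in size."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    idx = list(range(len(y))) + extra
    return [X[i] for i in idx], [y[i] for i in idx]

X = [[0.1], [0.2], [0.3], [0.9], [1.0], [1.1], [1.2], [1.3]]
y = [1, 1, 1, 0, 0, 0, 0, 0]        # 3 "bad loan" vs 5 "good loan" samples
Xb, yb = random_oversample(X, y)
print(sum(yb), len(yb) - sum(yb))   # 5 5
```

After balancing, a classifier fitted to `Xb, yb` no longer sees a majority-dominated objective, which is the effect the study exploits.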
In this study, we present a fuzzy counterpart to the probabilistic latent semantic analysis (PLSA) approach. It is derived by solving the optimization problem of the Tsallis entropy-penalized free energy of a pseudo-PLSA model under a modified i.i.d. assumption. This derivation parallels that of the conventional fuzzy counterpart of PLSA, which solves the optimization problem of the Shannon entropy-penalized free energy. Finally, the proposed method is validated using numerical examples.
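The Tsallis entropy that penalizes the free energy here generalizes the Shannon entropy used in the conventional counterpart; a minimal numerical sketch with a made-up distribution:

```python
import math

def tsallis_entropy(p, q):
    """S_q(p) = (1 - sum_i p_i^q) / (q - 1), defined for q != 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def shannon_entropy(p):
    """S(p) = -sum_i p_i log p_i, the q -> 1 limit of S_q."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
for q in (0.5, 1.0001, 2.0):
    print(q, round(tsallis_entropy(p, q), 4))
print("shannon", round(shannon_entropy(p), 4))
```

The q = 1.0001 value nearly coincides with the Shannon entropy, illustrating why the Tsallis penalty strictly generalizes the Shannon one.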
To support the inspiration of potential technical solutions, this paper considers the visualization of the various solving means found in patent documents through self-organizing maps (SOMs). Non-structured patent document data can be quantified through two different schemes: word-level co-occurrence probability vectors, and correlation coefficients of the generated co-occurrence probability vectors. Comparing the two SOMs derived with these schemes is useful for supporting innovation acceleration through the extraction of important pairs of related factors in new technology development. In this paper, co-cluster structures are utilized for emphasizing field-related solutions by constructing multiple SOMs after co-clustering. Document × keyword co-occurrence analysis extracts co-clusters consisting of mutually related pairs in particular fields. Additionally, this paper considers an extension to a multi-view situation, where each patent is additionally characterized by the F-term patent classification system of the Japan Patent Office. Through multi-view co-clustering among documents × keywords × F-terms, we demonstrate that theme field-related knowledge can be extracted.
Diabetes diagnosis is important due to the high death rate and serious complications caused by the disease. First, we propose a kernel k-means-based prediction method and explore attribute selections for effective and robust diabetes diagnosis. The method derives homogeneous sub-clusters in the high-dimensional kernelized feature space, computes the distance of a new instance to those sub-clusters, and then applies the 1-nearest-neighbor rule to classify the instance as positive or negative for the disease. Our experimental results identify the most effective attribute group for each considered prediction method and show that the proposed method outperforms existing ones for the task. Second, we introduce our diabetes visualization and decision support system, named DIAVIS, which is equipped with the proposed prediction method. This system can help doctors diagnose diabetes and track patients' health progress to prescribe proper medications during treatment.
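The core computation described above, the distance from a new instance to a sub-cluster centroid in the kernelized feature space, can be sketched as follows. The data, the RBF kernel, and its width are illustrative assumptions, not the paper's settings.

```python
import math

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_dist_sq(x, cluster, k=rbf):
    """||phi(x) - mu_C||^2 = k(x,x) - 2/|C| sum_z k(x,z)
                             + 1/|C|^2 sum_{z,z'} k(z,z')."""
    n = len(cluster)
    return (k(x, x)
            - 2.0 / n * sum(k(x, z) for z in cluster)
            + sum(k(z, zp) for z in cluster for zp in cluster) / n ** 2)

def classify(x, subclusters):
    """Label of the nearest sub-cluster centroid in feature space."""
    return min(subclusters, key=lambda lab: kernel_dist_sq(x, subclusters[lab]))

subclusters = {
    "positive": [[0.0, 0.0], [0.2, 0.1]],
    "negative": [[2.0, 2.0], [2.1, 1.9]],
}
print(classify([0.1, 0.0], subclusters))  # positive
```

The centroid never needs to be computed explicitly; only kernel evaluations appear, which is what makes the kernelized sub-cluster distance tractable.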
Uncertainty aggregation is an important form of reasoning for making decisions in the real world, which is full of uncertainty. This paper proposes an information source model for aggregating epistemic uncertainties about truth and discusses uncertainty aggregation in the form of possibility distributions. A new combination rule of possibilities for truth is proposed. The paper then turns to a traditional but seemingly forgotten representation of uncertainty, certainty factors (CFs), and proposes a new interpretation based on possibility theory. CFs have been criticized for lacking a sound mathematical interpretation from the viewpoint of probability. This paper therefore establishes a sound interpretation using possibility theory, and then examines the aggregation of CFs based on this interpretation and on several combination rules of possibility distributions. The paper proposes several combination rules for CFs with a sound theoretical basis, one of which is exactly the same as the oft-criticized classical combination rule.
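For reference, the oft-criticized classical (MYCIN-style) parallel combination rule for certainty factors, which the paper shows admits a sound possibilistic basis, can be sketched as follows; the input values are illustrative.

```python
def combine_cf(cf1, cf2):
    """Classical parallel combination of two certainty factors in [-1, 1]:
    reinforcement when signs agree, damped conflict otherwise."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(round(combine_cf(0.6, 0.5), 4))    # 0.8
print(round(combine_cf(-0.6, -0.5), 4))  # -0.8
print(round(combine_cf(0.6, -0.5), 4))   # 0.2
```

Note the rule is commutative and associative for same-sign evidence, and combining with 0 leaves a CF unchanged, properties any sound reinterpretation must preserve.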
Studies on the deployment of sensors mostly involve a 2D plane or a 3D volume. However, optimal sensor deployment in field environments is actually a problem of resource distribution on 3D surfaces. Compared with traditional deployment environments, field environments are more complicated, owing to interference with the detection capability of sensors and limitations on the maneuverability of platforms. In this paper, an optimal sensor deployment algorithm for 3D complex environments is discussed. First, considering the characteristics of field environments, a maneuverability matrix of heterogeneous platforms is introduced as a constraint. Then, a non-isomorphic environment value distribution map is constructed to mark the differences among mission areas. Furthermore, the sensor detection range model is improved to better handle occlusion. Finally, based on the multi-objective particle swarm optimization (MOPSO) algorithm, a sensor deployment strategy is developed for complex environments. Experiments demonstrate that the proposed algorithm deals better with the sensor deployment problem in field environments while improving the detection accuracy of objects in mission areas.
In recent years, educational support robots that assist learners have attracted attention. The main role of teacher-type robots in previous research has been to teach students how to solve problems and to explain learning material. Under such conditions, students may not learn the material adequately due to their reliance on the support of the robot; this paper utilizes cognitive apprenticeship theory to prevent this problem. Cognitive apprenticeship theory asserts that the support provided to a student should change according to the student's learning situation. Previous studies have reported that pedagogy based on cognitive apprenticeship theory can improve students' learning skills. Therefore, we hypothesize that students' learning will improve when robots teach them how to solve problems based on cognitive apprenticeship theory. In this paper, we investigate the learning effects of robot teaching based on cognitive apprenticeship theory in collaborative learning with junior high-school and university students. The results of this experiment suggest that collaborative learning with robots that employ cognitive apprenticeship theory improves the learning of junior high-school and university students.
Organizations are interested in exploiting data from other organizations for better analyses. Therefore, the data-related policies of organizations must be sensitive to data privacy issues, which have been widely discussed recently. The present study focuses on inter-group data usage for relative evaluation. This research is based on data envelopment analysis (DEA), which measures the relative efficiency of a decision-making unit (DMU) within a group. In DEA, establishing an efficient frontier consisting of efficient DMUs is essential. The efficiency value of a DMU is obtained by projecting it onto the efficient frontier, and its efficiency interval is obtained via interval DEA. When the original data of multiple groups are not open to each other, an alternative is to exchange information corresponding to the efficient frontiers, so that the efficiency intervals of a DMU can be estimated as if it were placed in the other groups. Therefore, in this paper, we propose a method to replace the efficient frontier with a weight vector set, from which the original data cannot be reconstructed. Considering the weight vector sets of multiple groups, a DMU has three types of efficiency intervals: in its own group, in each of the other groups, and in the integrated group. These provide rich insights into the DMU from a broad perspective, which encourages inter-group data usage. In this process, we focus on two types of information reduction: one from the efficient frontier to the weight vector set, and the other from a union of the groups to the integrated group.
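As a toy illustration of the idea of sharing a weight vector set instead of raw data (a drastic simplification, not the paper's interval-DEA formulation), consider single-input, two-output DMUs whose efficiency is evaluated under a shared finite weight set; all numbers below are made up.

```python
def ratio(d, w):
    """Weighted-output to weighted-input ratio for one DMU and weights
    w = (u1, u2, v)."""
    u1, u2, v = w
    return (u1 * d["y"][0] + u2 * d["y"][1]) / (v * d["x"])

def relative_efficiency(dmu, group, w):
    """Ratio of the DMU, normalized by the best ratio in the group
    under the same weights."""
    best = max(ratio(d, w) for d in group)
    return ratio(dmu, w) / best

group = [{"x": 2.0, "y": (4.0, 2.0)},
         {"x": 3.0, "y": (3.0, 9.0)},
         {"x": 4.0, "y": (6.0, 4.0)}]
weight_set = [(1.0, 1.0, 1.0), (2.0, 1.0, 1.0), (1.0, 2.0, 1.0)]  # shared instead of raw data

dmu = {"x": 3.0, "y": (6.0, 3.0)}  # outside DMU evaluated against the group
effs = [relative_efficiency(dmu, group, w) for w in weight_set]
print(round(min(effs), 3), round(max(effs), 3))  # 0.571 1.0
```

The spread [min, max] over the weight set plays the role of an efficiency interval: the outside DMU learns how it would rank in the other group without ever seeing that group's raw inputs and outputs.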
Images of the same object taken by multiple different cameras should have the same color reproduction. In practice, however, the images often show different color reproduction due to individual differences between cameras or internal camera parameters determined automatically when the images are taken. Conventional color transfer methods can be used to unify the color reproduction of images by transforming the color distribution of an image into that of a reference image. However, conventional methods do not always lead to good color reproduction and sometimes result in the loss of the color impression of the original images. In this paper, we propose a color calibration method for images of the same object taken by different cameras. Two color transfer methods are combined to realize color calibration without losing the color impression of the original image, and the resulting images obtained by the two methods are appropriately mixed into an output image. In experiments, the proposed method is applied to a variety of images, and its effectiveness is confirmed by subjective and objective evaluations.
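Color transfer methods of this kind often build on statistics matching; below is a minimal single-channel sketch in the spirit of Reinhard-style mean and standard-deviation transfer, not the paper's combined method. The pixel values are made up, and real methods work per channel in a decorrelated color space.

```python
def transfer_channel(src, ref):
    """Shift and scale src so its mean and standard deviation match ref."""
    def stats(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, v ** 0.5
    ms, ss = stats(src)
    mr, sr = stats(ref)
    scale = sr / ss if ss > 0 else 1.0
    return [(x - ms) * scale + mr for x in src]

src = [10, 20, 30, 40]       # dark, low-contrast channel
ref = [100, 140, 180, 220]   # brighter, higher-contrast reference
out = transfer_channel(src, ref)
print(out)  # matches ref here because src is an affine image of ref
```

Because this transform is a global affine map, it can flatten the distinctive color impression of the source image, which is exactly the loss the proposed mixing of two transfer methods is designed to avoid.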
The realization of effective and low-cost drug discovery is imperative to enable people to easily purchase and use medicines when necessary. This paper reports a smart system for detecting iPSC-derived cancer stem cells using conditional generative adversarial networks. The artificial intelligence (AI) system accepts an ordinary microscope image and transforms it into a corresponding fake fluorescent-marked image. The system was trained on 10,221 pairs of images. Its performance shows that the correlation between true fluorescent-marked images and fake fluorescent-marked images reaches up to 0.80, suggesting the fundamental validity and feasibility of our proposed system. Moreover, this research opens a new way for AI-based drug discovery in the process of iPSC-derived cancer stem cell detection.
Open data are becoming increasingly available in various domains, and many organizations rely on data for decision making. Such decision making requires care to distinguish between correlations and causal relationships. Among data analysis tasks, causal relationship analysis is especially complex because of unobserved confounders. For example, to correctly analyze the causal relationship between two variables, the possible confounding effect of a third variable should be considered. In the open-data environment, however, it is difficult to consider all possible confounders in advance. In this paper, we propose a framework for exploratory causal analysis of open data, in which possible confounding variables are collected and incrementally tested from a large volume of open data. To the best of the authors' knowledge, no framework has been proposed that incorporates data on possible confounders into the causal analysis process. This paper shows an original way to expand causal structures and generate reasonable causal relationships. The proposed framework accounts for the effect of possible confounding by first using a crowdsourcing platform to collect explanations of the correlation between variables. Keywords are then extracted using natural language processing methods, and the framework searches related open data according to the extracted keywords. Finally, the collected explanations are tested using several automated causal analysis methods. We conducted experiments using open data from the World Bank and the Japanese government. The experimental results confirm that the proposed framework enables causal analysis while considering the effects of possible confounders.
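One simple automated check a framework like this could apply (our illustrative assumption, not necessarily among the paper's methods) is the partial correlation of two variables given a candidate confounder. In the synthetic data below, z drives both x and y, so the raw correlation is high while the partial correlation collapses toward zero.

```python
import math

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

def partial_corr(x, y, z):
    """r_xy.z = (r_xy - r_xz r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

z = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]                    # candidate confounder
x = [2 * v + e for v, e in zip(z, [0.1, -0.1, 0.1, -0.1, 0.1, -0.1])]
y = [3 * v + e for v, e in zip(z, [0.1, 0.1, -0.1, -0.1, 0.0, 0.0])]
print(round(corr(x, y), 2))            # close to 1
print(round(partial_corr(x, y, z), 2)) # much smaller in magnitude
```

A collapse like this suggests the x–y correlation is explained by z, which is the kind of evidence the framework's incremental confounder testing is meant to surface.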
There is little research into designing artificial motivational agents. The end goal of our studies is therefore to create a dialogue system that motivates users to do their everyday tasks using natural language. In this paper, we present a method for distinguishing texts containing motivational advice from regular texts, in order to sort out noise in the training data for our dialogue system. We implemented a novel method of chaining two shallow networks together, utilizing the output of the first network to determine the input for the second one. We achieved F-scores of 0.94 and 0.97 with our proposed method. The contributions of this paper are threefold: first, we identified 14 hand-crafted features that make a text motivational/advisory; second, we created a classification algorithm that distinguishes motivational/advisory texts from regular ones; finally, our proposed method can be applied to other text classification tasks.
In this study, we investigate whether group norms emerge in human–robot groups. At present, there are a number of studies that examine social robots' ways of responding, gesturing, and displaying emotion. However, sociality implies that robots not only exhibit human-like behaviors but also display the tendency to adapt to a group of individuals. For robots to exhibit sociality, they must adapt to group norms without being told by the group members how to behave. Group norms refer to the unwritten, unspoken, and informal rules that are present in a group of individuals. In a previous study, we demonstrated that a robot model learned group norms in human groups. In the present study, we prepared quizzes with unclear and vague answers and instructed participants to take the quizzes together with the robot. The results of the quiz experiments demonstrated that the robot considered group norms in human–robot groups when making decisions; thus, group norms emerged in human–robot groups.