In medical and other applications, experts often use rules with several conditions, each of which involves a quantity within the domain of expertise of a different expert. In such situations, to estimate the degree of confidence that all these conditions are satisfied, we need to combine the opinions of several experts, i.e., in fuzzy techniques, to combine the membership functions corresponding to different experts. In each area of expertise, different experts may have somewhat different membership functions describing the same natural-language (“fuzzy”) term such as small. It is desirable to present the user with all possible conclusions corresponding to all these membership functions. In general, even if, for each area of expertise, we have only a 1-parametric family characterizing the different membership functions, then for rules with 3 conditions, we already have a difficult-to-interpret 3-parametric family of possible consequences. It is thus desirable to limit ourselves to cases in which the resulting family is still manageable, e.g., is 1-parametric. In this paper, we provide a full description of all such families. Interestingly, it turns out that such families are possible only if we allow non-normalized membership functions, i.e., functions whose maximum may be smaller than 1. We argue that this is the way to go, since normalization loses some of the information that we receive from the experts.
According to the Nobel laureate John Nash, if a group of people wants to select one of several alternatives in each of which all of them get a better deal than in the status quo situation, then they should select the alternative that maximizes the product of their utilities. In this paper, we provide a new (simplified) derivation of this result, a derivation that is not only simpler but also does not require that the preference relation between different alternatives be linear.
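The selection criterion described above can be sketched in code. This is a minimal illustration with made-up utility values, not the paper's derivation: among the alternatives where every participant is strictly better off than in the status quo, it picks the one maximizing the product of utility gains.

```python
# Illustrative sketch of the Nash bargaining criterion: among alternatives
# where every participant beats the status quo, maximize the product of
# utility gains. All utility values below are hypothetical.

def nash_select(alternatives, status_quo):
    """alternatives: list of utility tuples, one value per participant;
    status_quo: utility tuple for the current situation."""
    feasible = [a for a in alternatives
                if all(u > s for u, s in zip(a, status_quo))]

    def product_of_gains(a):
        p = 1.0
        for u, s in zip(a, status_quo):
            p *= (u - s)
        return p

    return max(feasible, key=product_of_gains)

# Two participants, three candidate alternatives (hypothetical numbers):
best = nash_select([(3, 2), (2, 4), (5, 1.5)], status_quo=(1, 1))
# gains: 2*1 = 2, 1*3 = 3, 4*0.5 = 2, so (2, 4) is selected
```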
In many medical applications, we diagnose a disease and/or apply a certain remedy if, e.g., two out of five conditions are satisfied. In the fuzzy case, i.e., when we only have certain degrees of confidence that each of n statements is satisfied, how do we estimate the degree of confidence that k out of n conditions are satisfied? In principle, we can get this estimate by applying the usual methodology of fuzzy techniques: we represent the desired statement in terms of “and” and “or,” and use fuzzy analogues of these logical operations. The problem with this approach is that for large n, it requires too many computations. In this paper, we derive the fastest-to-compute alternative formula. In this derivation, we use ideas from neural networks.
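The "usual methodology" mentioned above can be illustrated by a brute-force sketch. This is not the paper's fast formula; it uses one common choice of fuzzy operations (min for "and," max for "or") and enumerates all k-element subsets, which is exactly the combinatorial cost the paper's alternative avoids.

```python
from itertools import combinations

# Brute-force fuzzy degree that at least k of n conditions hold, using
# min for "and" and max for "or" (one common choice of fuzzy operations).
# The number of subsets grows combinatorially with n, which is the
# computational problem motivating a faster formula.
def at_least_k(degrees, k):
    return max(min(degrees[i] for i in subset)
               for subset in combinations(range(len(degrees)), k))

# Five conditions with hypothetical degrees of confidence; "2 out of 5":
d = at_least_k([0.9, 0.4, 0.7, 0.2, 0.8], k=2)
# With min/max operations this reduces to the k-th largest degree (0.8),
# a hint that much faster computation is possible.
```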
In many practical situations, a user needs our help in selecting the best out of a large number of alternatives. To be able to help, we need to understand the user’s preferences. In decision theory, preferences are described by numerical values known as utilities. It is often not feasible to ask the user to provide utilities of all possible alternatives, so we must be able to estimate these utilities based on utilities of different aspects of these alternatives. In this paper, we provide a general formula for combining utilities of aspects into a single utility value. The resulting formula turns out to be in good accordance with the known correspondence between geometric images and different degrees of happiness.
Millions of lines of code are written every day, and it is practically impossible to thoroughly test all this code in all possible situations. In practice, we need to be able to separate code that is more likely to contain bugs – and which thus needs to be tested more thoroughly – from code that is less likely to contain flaws. Several numerical characteristics, known as code quality metrics, have been proposed for this separation. Recently, a new efficient class of code quality metrics has been proposed, based on the idea of assigning consecutive integers to different levels of complexity and vulnerability: we assign 1 to the simplest level, 2 to the next simplest level, etc. The resulting numbers are then combined – if needed, with appropriate weights. In this paper, we provide a theoretical explanation for this idea.
In this study, we present an approach to incorporating the importance of symptoms into the diagnosis of syndromes with integrated Eastern and Western medicine. We also focus on the knowledge representation and inference engine of our proposed system, which use the importance of symptoms. The innovative point of this study is combining the degree of diagnosis of a syndrome in Eastern medicine with that of a disease in Western medicine when both medicines are associated with a common “disease” name, in order to obtain a more accurate diagnosis. Moreover, the importance of symptoms in the inference rules of medical expert systems still plays an important role in the diagnosis of syndromes. Based on this approach, the system adapts better to the real clinical practice of integrated Eastern and Western medicine diagnosis. Finally, examples are provided to demonstrate the advantage of this approach.
A wireless ad hoc network is a self-configuring, dynamically distributed network in which stations can move freely. In an ad hoc network, some flows have difficulty accessing the channel due to contention at both the medium-access control (MAC) and link layers. The IEEE 802.11 protocol is currently the de facto standard for wireless networks. It uses the enhanced distributed channel access (EDCA) method to regulate access to the transmission medium for each type of data flow. The size of the contention window (CW) in EDCA determines the probability that each flow accesses the channel. In our approach, useful information is obtained from the physical, MAC, and link layers. A fuzzy logic system is then used to adjust the CW size based on this information, thereby improving the fairness index of data flows (voice, video, best effort) in IEEE 802.11 EDCA. The simulation results show that the proposed method can improve the throughput and fairness index of data flows.
In the present paper, we propose a robotic model to help determine a robot’s position under the changing conditions of human personal space in a human-robot group. Recently, several attempts have been made to develop personal robots suitable for human communities. Determining a robot’s position is important not only to avoid collisions with humans but also to maintain a socially acceptable distance from them. The interpersonal space maintained by persons in a community depends on the particular context and situation. Therefore, robots need to determine their own positions while considering the positions of other persons and evaluating the changes in their personal space. To address this problem, we proposed a robot navigation model and examined whether the experiment participants could distinguish the robot’s trajectory from a human’s trajectory in an experimental scenario. We prepared a scenario in which robots in a group needed to keep an appropriate distance in three-dimensional space. The experiment participants reported their impressions of the robot movements while watching recordings of the scenario. The results indicate that (1) a robot using the proposed model is able to follow the other group members and (2) the experiment participants could not tell whether the trajectories of the robots were controlled by humans or by the proposed model. We therefore conclude that the proposed model generates suitable trajectories in robot groups.
Connecting features of face images with the interestingness of a face may assist in a range of applications such as intelligent visual human-machine communication. To enable this connection, we use interestingness and image features in combination with machine learning techniques. In this paper, we use the visual saliency of face images as learning features to classify the interestingness of the images. Applying multiple saliency detection techniques specifically to objects in the images allows us to create a database of saliency-based features. Consistent estimation of facial interestingness using multiple saliency methods contributes both to estimating and to modifying the interestingness of an image. To investigate interestingness, one of the personal characteristics of a face image, we test a large benchmark face database using our method. Taken together, the method may advance prospects for further research incorporating other personal characteristics and visual attention related to face images.
Person re-identification (ReID), the task of associating the detected images of a person as he/she moves in a non-overlapping camera network, faces different challenges, including variations in illumination, viewpoint, and occlusion. To ensure good performance for person ReID, state-of-the-art methods have leveraged different characteristics for person representation. As a result, a high-dimensional feature vector is extracted and used in the person matching step. However, each feature plays a specific role in distinguishing one person from the others. This paper proposes a method for person ReID wherein the correspondences between descriptors in high-dimensional space can be achieved via explicit feature selection and appropriate projection with a Gaussian kernel. The advantage of the proposed method is that it allows simultaneous matching of the descriptors while preserving the local geometry of the manifolds. Different experiments were conducted on both single-shot and multi-shot person ReID datasets. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods.
By generating artificial samples in the minority class and regulating the labels of the neighboring samples of the majority class, the precision of classification prediction for imbalanced learning can clearly be enhanced. This article presents a unified solution that combines these learning factors to improve learning performance. The proposed method addresses the imbalance through feature selection that incorporates the generation of artificial samples and label regulation. A probabilistic representation is used for all aspects of learning: class, sample, and feature. Bayesian inference is applied to the learning model to interpret the imbalance occurring in the training data and to describe solutions for recovering the balance. We show that the generation of artificial samples is a sample-based approach and that label regulation is a class-based approach. We discovered that feature selection achieves surprisingly good results when combined with a sample- or class-based solution.
Pulse-based disease diagnosis and acupuncture therapy are key components of traditional Oriental medicine. This study aims to model the thinking of medical doctors with regard to their use of pulse-based diagnosis and acupuncture therapy. This paper focuses on the fuzzy inference and knowledge base, which are the main components of the system for pulse-based disease diagnosis and acupuncture therapy. The input of the system is the patient’s pulse symptoms with fuzzy degrees, whereas the output is the disease diagnosis and acupuncture therapy prescription. In this system, the knowledge base consists of nearly 1,200 rules for diagnosis and treatment. An evaluation by a group of traditional medical doctors indicates that the results of the newly proposed system are in good accordance with those of doctors practicing traditional medicine. This approach leads to better results than previous approaches because it uses fuzzy logic, which is an appropriate tool here because most entities in traditional medicine are fuzzy in nature. The system for pulse-based disease diagnosis and acupuncture therapy can mimic the thinking of traditional practitioners, and it can be a “good teacher” for medical students who want to learn traditional Vietnamese medicine.
In this research, we propose a stochastic model with a finite time horizon for sales competition between a state-owned company and a private (foreign) competitor. We assume that the foreign company’s objective is to maximize revenues, while the state-owned agent is concerned with welfare maximization. There are many stochastic models for sales, but what is new in our case is that we assume a mixed oligopoly with different types of firms, private and state-owned, which have somewhat different objective functions. As the control variable, we take the advertising expenses of the private firm. The sales burst rate depends on the advertising expenditure and the experience stock gained. For the public firm, we assume that advertising efforts are fixed. The optimal control problem is thus to maximize the private firm’s revenues, taking into account possible uncertainties of the stochastic profit flow, using Bellman’s optimality condition. We find that the Advertisement-Experience (AE) efforts of the private firm increase if sales are increasing. Next, the AE may decrease if the experience level of the private firm increases and a sales burst occurs. To optimize governmental policies, we determine the optimal AE effort of the public firm so that social welfare achieves its maximum value.
Robots have the potential to facilitate the future education of all generations, particularly children. However, existing robots are limited in their ability to automatically perceive and respond to human emotional states. We hypothesize that such sophisticated models suffer from individual differences in human personality. Therefore, we propose a multi-characteristic model architecture that combines personalized machine learning models and utilizes the prediction score of each model. This architecture is formed with reference to an ensemble machine learning architecture. In this study, we present a method for calculating the weighted average in a multi-characteristic architecture by using the similarities between a new sample and the trained characteristics. We estimate the degree of confidence during communication as a human internal state. Empirical results demonstrate that using multi-model training of each person’s information to account for individual differences provides improvements over a traditional machine learning system and yields insight into dealing with various individual differences.
The human gaze contains substantial personal information and can be extensively employed in several applications if its relevant factors can be accurately measured. Further, several fields could be substantially innovated if the gaze could be analyzed using popular and familiar smart devices. Deep learning-based methods are robust, making them crucial for gaze estimation on smart devices. However, because the internal functions of deep learning models are black boxes, deep learning systems often make estimations for unclear reasons. In this paper, we propose a visualization method for a regression problem to address the black-box nature of the deep learning-based gaze estimation model. The proposed visualization method can clarify which region of an image contributes to deep learning-based gaze estimation. We visualized the gaze estimation model proposed by a research group at the Massachusetts Institute of Technology. The accuracy of the estimation was low, even when the facial features important for gaze estimation were recognized correctly. The effectiveness of the proposed method was further confirmed through quantitative evaluation using the area over the MoRF perturbation curve (AOPC).