This paper considers conjectural variations equilibrium (CVE) in a single-commodity market with a mixed duopoly of competitors. The duopoly is called semi-mixed because one (semi-public) company's objective is to maximize a convex combination of its net profit and domestic social surplus (DSS). The two agents make conjectures about the fluctuations of the equilibrium price that occur after they vary their supplies. Based on the concepts of exterior and interior equilibrium, as well as the existence theorem for the interior equilibrium (a.k.a. the consistent CVE, or the exterior equilibrium with consistent conjectures) demonstrated in the authors' previous papers, we analyze the behavior of the interior equilibrium as a function of the semi-public firm's level of socialization. When this parameter, reflected by the convex combination coefficient, tends to 1, thus transforming the semi-public company into a completely public one and the considered model into the classical mixed duopoly, two trends become apparent. First, for the private company, the equilibrium with consistent conjectures (CCVE) becomes more attractive (lucrative) than the Cournot-Nash equilibrium. Second, there exists a value of the convex combination coefficient (unique in the case of an affine demand function) such that the private agent's profit is the same in both of the above-mentioned equilibrium types, making any subsidy to the producer or to the consumers unnecessary. Numerical experiments with various mixed duopoly models confirm the robustness of the proposed algorithm for finding the optimal value of this combination coefficient (a.k.a. the semi-public company's socialization level).
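As a sketch of the final step, the equal-profit coefficient can be located by one-dimensional root finding on the profit gap between the two equilibria. A minimal sketch, assuming only the abstract's qualitative shape: the two profit functions below are toy stand-ins (in the actual model they come from solving the CCVE and the Cournot-Nash equilibrium for each coefficient value); only the root-finding pattern is meant to carry over.

```python
from scipy.optimize import brentq

# Toy stand-ins: in the actual model these would solve the CCVE and the
# Cournot-Nash equilibrium of the duopoly at socialization level beta.
def private_profit_ccve(beta):
    return 1.0 + 0.8 * beta      # placeholder: CCVE grows more lucrative with beta

def private_profit_cournot(beta):
    return 1.5 + 0.2 * beta      # placeholder: Cournot-Nash profit

# Equal-profit socialization level; for an affine demand function the paper
# states this root is unique, so bracketing on [0, 1] suffices.
beta_star = brentq(lambda b: private_profit_ccve(b) - private_profit_cournot(b),
                   0.0, 1.0)
print(f"equal-profit socialization level: {beta_star:.4f}")
```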
From the viewpoint that the vagueness of a decision maker's evaluations causes inconsistencies in a pairwise comparison matrix, interval weights have been estimated using the interval AHP. However, the estimated interval weights are often insufficient to express the vagueness of the decision maker's evaluations. We propose three modified estimation methods for interval weights. The first is based on a relaxation of the optimality of the estimated interval weights in the conventional method. The second employs a modified objective function, and the third is based on a relaxation of optimality with respect to the modified objective function. Two of the proposed methods include parameters specifying the degree of relaxation. Through numerical experiments with 100,000 pairwise comparison matrices generated from 100 true interval weight vectors, we demonstrate the advantages of the proposed methods over the conventional method, and determine the best method and the suitable degree of relaxation.
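For context, here is a minimal sketch of the conventional interval AHP estimation that the proposed methods relax, assuming the standard linear-programming formulation that minimizes the total interval width subject to inclusion and interval-probability normalization constraints (the paper's exact constraint set may differ).

```python
import numpy as np
from scipy.optimize import linprog

def interval_ahp(A, eps=1e-6):
    """Estimate interval weights [l_i, u_i] from a crisp, reciprocal pairwise
    comparison matrix A by minimizing total interval width.
    Decision variables are stacked as x = [l_1..l_n, u_1..u_n]."""
    n = A.shape[0]
    c = np.concatenate([-np.ones(n), np.ones(n)])   # min sum(u) - sum(l)
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.zeros(2 * n)       # inclusion: l_i <= a_ij * u_j
            r[i], r[n + j] = 1.0, -A[i, j]
            rows.append(r); rhs.append(0.0)
            # (for reciprocal A, a_ij * l_j <= u_i is implied by the (j,i) case)
    for i in range(n):                # interval-probability normalization
        r = np.zeros(2 * n)           # sum_{j!=i} l_j + u_i <= 1
        r[:n], r[i], r[n + i] = 1.0, 0.0, 1.0
        rows.append(r); rhs.append(1.0)
        r = np.zeros(2 * n)           # sum_{j!=i} u_j + l_i >= 1
        r[n:], r[n + i], r[i] = -1.0, 0.0, -1.0
        rows.append(r); rhs.append(-1.0)
    for i in range(n):                # l_i <= u_i
        r = np.zeros(2 * n)
        r[i], r[n + i] = 1.0, -1.0
        rows.append(r); rhs.append(0.0)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(eps, 1.0)] * (2 * n))
    return res.x[:n], res.x[n:]

# Consistent 3x3 example (true weights 0.5, 0.3, 0.2) yields near-crisp intervals:
A = np.array([[1.0, 5/3, 2.5], [0.6, 1.0, 1.5], [0.4, 2/3, 1.0]])
lower, upper = interval_ahp(A)
```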
Noise rejection is an important issue in practical applications of FCM-type fuzzy clustering, and noise clustering achieves robust estimation of cluster prototypes by providing an additional noise cluster into which noise objects are dumped. Noise objects, which have larger distances from all clusters, are designed to be assigned to the noise cluster, which is located at an equal (fixed) distance from all objects. Fuzzy co-clustering is an extended version of FCM-type clustering for handling cooccurrence information among objects and items, where the goal of the analysis is to extract pairwise clusters of familiar objects and items. This paper proposes a novel noise rejection model for fuzzy co-clustering induced by multinomial mixture models (MMMs), where a noise cluster is defined with homogeneous item memberships so as to draw in noise objects whose cooccurrence features are dissimilar from all general clusters. The noise rejection scheme can also be utilized in selecting the optimal cluster number through sequential implementation with different cluster numbers.
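A minimal sketch of the core idea, under my own simplifying assumptions rather than the paper's exact update rules: the noise cluster is appended as an extra multinomial component whose item memberships are uniform, so an object whose cooccurrence pattern is unlike every general cluster receives most of its membership there.

```python
import numpy as np

def memberships(X, item_dists, mix, noise_weight=0.1):
    """E-step-style object memberships with a homogeneous noise cluster.
    X: (objects, items) cooccurrence counts; item_dists: (clusters, items),
    rows summing to 1; mix: (clusters,) mixing weights. noise_weight is an
    illustrative prior mass reserved for the noise cluster."""
    M = X.shape[1]
    W = np.vstack([item_dists, np.full(M, 1.0 / M)])   # append uniform noise row
    W = np.maximum(W, 1e-12)                           # guard against log(0)
    pi = np.append(mix * (1.0 - noise_weight), noise_weight)
    log_u = X @ np.log(W.T) + np.log(pi)               # multinomial log-likelihoods
    log_u -= log_u.max(axis=1, keepdims=True)          # stabilize the softmax
    u = np.exp(log_u)
    return u / u.sum(axis=1, keepdims=True)            # rows sum to 1
```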
This paper considers a fuzzy c-means (FCM) clustering algorithm that combines deterministic annealing with Tsallis entropy maximization. The Tsallis entropy is a q-parameter extension of the Shannon entropy. By maximizing the Tsallis entropy within the framework of FCM, statistical-mechanical membership functions can be derived. A major consideration when using this method is how to determine appropriate values of q and the highest annealing temperature, T_high, for a given data set. Accordingly, this paper presents a method for determining these values simultaneously without introducing any additional parameters, in which the membership function is approximated using a series expansion. The results of experiments indicate that the proposed method is effective, and that both q and T_high can be determined automatically and algebraically from a given data set.
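A short sketch of the membership computation, assuming the usual q-exponential form of the Tsallis-entropy-maximized FCM membership; the determination of q and T_high itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def tsallis_membership(d, q, T):
    """d: (n_points, n_clusters) squared distances to cluster centers; q != 1.
    Returns memberships whose rows sum to 1. As q -> 1 the q-exponential
    reduces to exp(-d / T), recovering the Shannon-entropy (Gibbs) case."""
    beta = 1.0 / T                                      # inverse annealing temperature
    base = np.maximum(1.0 - (1.0 - q) * beta * d, 0.0)  # q-exponential argument
    u = base ** (1.0 / (1.0 - q))
    return u / u.sum(axis=1, keepdims=True)
```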
Human cognitive mechanisms have been studied for the design of user-friendly interfaces. One of the key issues is the sense of agency, defined as the sense that "I am the one who is causing this action." The user interface is important because it can alter the sense of agency. In this research, we focus on a prime stimulus and evaluate its effect through experiments with participants. A ball moved in a circle on a monitor at a constant speed, and participants stopped it by pushing a key. They were given both a prime stimulus and a feedback stimulus and indicated whether they were the agent who stopped the ball, i.e., whether they felt a sense of agency. From the results of the experiment, we found that the prime stimulus can have both a positive and a negative influence on the sense of agency when human prediction is unreliable.
The routing mechanism is a key issue in wireless sensor networks, and the rumor routing protocol can reduce energy consumption and extend the network lifetime through its unicast mechanism. However, in the rumor routing protocol, the transmission path of an event message is not optimal, and the random forwarding mechanism can lead to routing loops. In this paper, we investigate the rumor routing protocol and propose an improved rumor routing protocol based on optimized intersection angle theory. Localization technology and a vector intersection angle mechanism are brought into the new protocol as two metrics for route selection. The new mechanism can reduce the energy consumption of data transmission. We compare the rumor routing protocol, the GPSR protocol, a rumor-routing-based protocol, and the improved protocol. Simulation results indicate that the improved routing protocol reduces the energy consumption of the routing path and extends the network lifetime.
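A hypothetical sketch of the angle metric: with node positions known from localization, a forwarding node can prefer the neighbor whose direction deviates least from the direction toward the target, which shortens paths and avoids the detours of blind random forwarding. The function names and the tie-free selection rule are illustrative, not the paper's protocol.

```python
import numpy as np

def next_hop(current, destination, neighbors):
    """Pick the neighbor minimizing the angle between (neighbor - current)
    and (destination - current); positions are 2-D coordinates."""
    ref = np.asarray(destination, dtype=float) - np.asarray(current, dtype=float)
    best, best_angle = None, np.inf
    for n in neighbors:
        v = np.asarray(n, dtype=float) - np.asarray(current, dtype=float)
        cos = np.dot(ref, v) / (np.linalg.norm(ref) * np.linalg.norm(v))
        angle = np.arccos(np.clip(cos, -1.0, 1.0))   # intersection angle in radians
        if angle < best_angle:
            best, best_angle = n, angle
    return best

# Example: the neighbor at (1, 1) is chosen as it points most directly at (5, 5).
hop = next_hop((0, 0), (5, 5), [(1, 1), (1, -1), (0, 2)])
```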
Total knee arthroplasty (TKA) is an effective surgery for knees damaged by osteoarthritis, rheumatoid arthritis, and post-traumatic arthritis. This procedure requires an expert surgeon with a high level of skill and experience. Although navigation systems for improving precision and shortening operative time have already been studied, no study has yet addressed an instruction system for improving the skills of surgeons. The purpose of this study is to develop a training system that teaches TKA surgery so that non-expert surgeons can use it to acquire skin-incision skills. The proposed method includes a simulator for a model knee with a 3D electromagnetic motion tracker. Through experimentation, a method of evaluating incisions into the skin is established by tracing a line with a mock scalpel. The proposed method was applied to six non-experts. The scores for the length experiments were 87.82±8.88 (Set 1: non-teaching), 92.66±5.77 (Set 2: teaching), and 92.14±6.17 (Set 3: non-teaching). The scores for the position experiments were 70.64±15.11 (Set 1: non-teaching), 83.63±10.07 (Set 2: teaching), and 82.05±7.80 (Set 3: non-teaching). In conclusion, the proposed method succeeds in teaching the operator scalpel incision skills.
This paper presents a novel method of analyzing morphosemantic patterns in language to detect cyberbullying, i.e., frequently appearing harmful messages and entries that aim to humiliate other users. Morphosemantic patterns represent a novel concept based on the assumption that the analyzed elements can be perceived as a combination of morphological information, such as parts of speech, and semantic information, such as semantic roles and categories. The patterns are extracted automatically from data containing harmful (cyberbullying) and non-harmful entries found on the unofficial websites of Japanese high schools. These website data were prepared and standardized by the Human Rights Center in Mie Prefecture, Japan. The patterns extracted in this way are then applied to a document classification task using the provided data in 10-fold cross-validation. The results indicate that the morphosemantic sentence representation can be considered useful in the task of detecting the deceptive and provocative language used in cyberbullying.
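An illustrative simplification, not the authors' exact algorithm: if each sentence is represented as a sequence of morphosemantic tokens (e.g., a part of speech fused with a semantic role), patterns can be extracted as ordered element combinations and weighted by how much more often they occur in harmful than in non-harmful entries.

```python
from collections import Counter
from itertools import combinations

def patterns(tokens, max_len=3):
    """Ordered, possibly non-contiguous combinations of morphosemantic tokens,
    e.g. ('POS:verb|ROLE:agent', 'CAT:insult'). Keep sentences short."""
    return {c for n in range(1, max_len + 1) for c in combinations(tokens, n)}

def train(harmful, harmless):
    """Weight each pattern by its normalized frequency difference in [-1, 1]."""
    pos, neg = Counter(), Counter()
    for sent in harmful:
        pos.update(patterns(sent))
    for sent in harmless:
        neg.update(patterns(sent))
    return {p: (pos[p] - neg[p]) / (pos[p] + neg[p])
            for p in set(pos) | set(neg)}

def score(sentence, weights):
    """Positive scores suggest harmful; a threshold would be tuned on held-out data."""
    return sum(weights.get(p, 0.0) for p in patterns(sentence))
```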
This work aims to tackle two research questions regarding post-disaster rescue: how to optimize rescue team dispatch based on the specialties of the teams and the types of damage incurred, and how to optimize the allocation of injured patients to hospitals based on their symptoms, the rescue teams allocated, and the capabilities of the hospitals, so as to minimize fatalities. Rather than handling these two problems separately, we formulate them as an integrated system. A real-coded genetic algorithm is applied to minimize the estimated transport time in terms of distance and the disparity between resource supply and demand. A set of scenarios is simulated and analyzed to provide insights for policymakers. Furthermore, the simulation results can be used for future post-disaster medical assistance training.
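A minimal sketch of a real-coded genetic algorithm of the kind applied here, with tournament selection, BLX-alpha crossover, and elitism. The objective below is a toy placeholder for the paper's weighted combination of estimated transport distance and supply-demand disparity, and the encoding of dispatch/allocation decisions into the real vector is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Toy placeholder fitness (to be minimized); the real objective would
    decode x into dispatch/allocation decisions and score them."""
    return float(np.sum((x - 0.5) ** 2))

def blx(p1, p2, alpha=0.5):
    """BLX-alpha crossover: sample the child around and between the parents."""
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    d = hi - lo
    return rng.uniform(lo - alpha * d, hi + alpha * d)

def rcga(dim=10, pop_size=40, gens=200):
    pop = rng.random((pop_size, dim))
    for _ in range(gens):
        fit = np.array([objective(x) for x in pop])

        def tournament():
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fit[i] < fit[j] else pop[j]

        new = [pop[fit.argmin()].copy()]                  # elitism
        while len(new) < pop_size:
            new.append(np.clip(blx(tournament(), tournament()), 0.0, 1.0))
        pop = np.array(new)
    return pop[np.array([objective(x) for x in pop]).argmin()]

best = rcga()
```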
Existing cross-media retrieval methods usually learn a single shared latent subspace for different retrieval tasks, which achieves only suboptimal retrieval. In this paper, we propose a novel cross-media retrieval method based on Query Modality and Semi-supervised Regularization (QMSR). Taking cross-media retrieval between images and texts as an example, QMSR learns two pairs of mappings for the different retrieval tasks (i.e., using images to search texts (Im2Te) or using texts to search images (Te2Im)) instead of a single pair. QMSR learns the two pairs of projections by optimizing both the correlation between images and texts and the semantic information of the query modality (image or text), and integrates semi-supervised regularization, which exploits the structural information among both the labeled and unlabeled data of the query modality, to transform the different media objects from their original feature spaces into two different isomorphic subspaces (the Im2Te common subspace and the Te2Im common subspace). Experimental results show the effectiveness of the proposed method.
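For illustration, here is the retrieval step only (the projection learning is the paper's contribution and is omitted): each direction uses its own pair of learned mappings, and ranking is done in the corresponding subspace. The cosine-similarity ranking is my assumption, not stated in the abstract.

```python
import numpy as np

def retrieve(query_feat, db_feats, P_query, P_db):
    """Rank database items for one query. For Im2Te, (P_query, P_db) would be
    the image/text projections into the Im2Te common subspace; for Te2Im,
    the other learned pair is used."""
    q = query_feat @ P_query                         # map query into the subspace
    D = db_feats @ P_db                              # map database items likewise
    q = q / np.linalg.norm(q)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    return np.argsort(-(D @ q))                      # indices by descending similarity
```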
This paper introduces a rough set model for analyzing an information system in which some condition and decision data are missing. Many studies have focused on missing condition data, but very few have accounted for missing decision data. Common approaches tend to remove objects with missing decision data because such objects are apparently considered worthless from the perspective of decision-making. However, we indicate that this removal may lead to information loss. Our method retains such objects. We observe that a scenario involving missing decision data is somewhat similar to semi-supervised learning, because some objects are characterized by complete decision data whereas others are not. This leads us to the idea of estimating potential candidates for the missing data using the available data. The potential candidates are determined by two quantitative indicators: local decision probability and universal decision probability. They allow us to define set approximations and reducts. We also compare the reducts and rules induced from two information systems: one that removes objects with missing decision data and one that retains such objects. We show that the knowledge induced from the former can also be induced from the latter using our approach. Thus, our method offers a more general way to handle missing decision data and prevents information loss.
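A hedged sketch of the candidate-estimation idea: for an object with a missing decision value, decisions that are frequent among objects sharing its condition attribute values (a local indicator) or, failing that, among all objects (a universal indicator) become candidates. The table representation and thresholds below are illustrative, not the paper's definitions.

```python
from collections import Counter

def candidates(obj, table, t_local=0.3, t_universal=0.3):
    """obj/table rows: {'cond': tuple_of_condition_values, 'd': decision or None}.
    Assumes at least one object in the table has a known decision."""
    peers = [r["d"] for r in table
             if r["cond"] == obj["cond"] and r["d"] is not None]  # indiscernible objects
    all_d = [r["d"] for r in table if r["d"] is not None]
    cand = set()
    if peers:                                  # local decision probability
        local = Counter(peers)
        cand = {d for d, c in local.items() if c / len(peers) >= t_local}
    if not cand:                               # fall back on universal probability
        universal = Counter(all_d)
        cand = {d for d, c in universal.items() if c / len(all_d) >= t_universal}
    return cand
```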
A feature encoding strategy is now a common approach for representing a document, an image, or an audio signal as a feature vector. In image recognition problems, this approach treats an image as a set of local feature descriptors, which are then converted to a feature vector based on a set of basis vectors called a codebook. This paper focuses on the prior probability, one of the codebook parameters, and analyzes how the feature encoding depends on it. We conducted two experiments: an analysis of the prior probabilities in state-of-the-art encodings, and an experiment controlling the prior probabilities. The first experiment investigates the distribution of the prior probabilities and compares the recognition performance of recent techniques. The results suggest that recognition performance probably depends on the distribution of the prior probabilities. The second experiment performs a further statistical analysis by controlling the distribution of the prior probabilities. The results show a strong negative linear relationship between the standard deviation of the prior probabilities and recognition accuracy. From these experiments, the quality of the codebook used for feature encoding can be quantitatively measured, and recognition performance can be improved by optimizing the codebook. Moreover, because the codebook is created in an offline step, optimizing it does not add any computational cost in practical applications.
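A minimal sketch of how codeword prior probabilities and their spread can be measured from data, assuming priors estimated from hard nearest-codeword assignment counts; soft-assignment encodings would weight the counts instead.

```python
import numpy as np

def codebook_priors(descriptors, codebook):
    """descriptors: (n, d) local features; codebook: (k, d) basis vectors.
    Returns the empirical prior over codewords and its standard deviation,
    the statistic the experiments relate to recognition accuracy."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                     # nearest codeword per descriptor
    counts = np.bincount(assign, minlength=len(codebook))
    priors = counts / counts.sum()
    return priors, priors.std()
```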
A method based on singular value decomposition (SVD) is proposed for extracting features from motion time-series data observed with various sensing systems. Matrices consisting of sliding-window (SW) subsets of the time-series data are decomposed, yielding singular vectors as the patterns of the motion and singular values as scalars quantifying how strongly the corresponding singular vectors describe the matrices. The sliding-window SVD (SW-SVD) was applied to analyze acceleration during walking. Three levels of walking difficulty were simulated by restricting the right knee joint during the measurement. The accelerations of the middle of each shank and the back of the waist were measured and normalized before the SW-SVD was performed. The results showed that the first singular values inferred from the acceleration data of the restricted side (the right shank) were significantly related to the degree of restriction for all subjects, whereas there was no common trend in the singular values of the left shank and the waist. This suggests that the SW-SVD is a reliable method for evaluating walking disability. Furthermore, a 2D visualization tool is proposed to provide intuitive information about walking difficulty, which can be used in walking rehabilitation to monitor recovery.
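A compact sketch of the SW-SVD on a one-dimensional acceleration signal: windowed segments are stacked into a matrix, and the leading singular value and vector are kept as the dominant motion pattern and its strength. The per-matrix normalization shown is an assumption standing in for the paper's normalization of the accelerations.

```python
import numpy as np

def sw_svd(signal, window, step):
    """Stack sliding-window segments of a 1-D signal into a matrix and return
    the first singular value (pattern strength) and first right singular
    vector (the dominant motion pattern)."""
    starts = range(0, len(signal) - window + 1, step)
    M = np.array([signal[s:s + window] for s in starts])
    M = (M - M.mean()) / M.std()                    # illustrative normalization
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return S[0], Vt[0]

# Example on a synthetic gait-like signal sampled at 100 Hz:
t = np.arange(0, 10, 0.01)
strength, pattern = sw_svd(np.sin(2 * np.pi * 1.0 * t), window=100, step=50)
```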
This paper presents a new approach that combines branch and price (B&P) with metaheuristics to derive various high-quality schedules as solutions to the nurse scheduling problem (nurse rostering problem). Our approach has two main features: the combination of B&P with metaheuristics, and the implementation of an efficient B&P algorithm. We demonstrate the effectiveness of our approach by applying it to widely used benchmark instances.
To enable viewers to select only the specific scenes they want to watch in a baseball video and to personalize its highlight sub-videos, we require an automatic baseball video tagging system that can divide a baseball video into multiple sub-videos, one per at-bat scene, and append tag information relevant to each at-bat scene. Toward developing such a system, our previous papers proposed several tagging algorithms that use ball-by-ball textual reports and voice recognition, and attempted to refine the models of baseball games. To improve robustness, this paper proposes a novel tagging method that utilizes multiple kinds of play-by-play comment patterns for voice recognition, which represent the situation of at-bat scenes, and takes their "Priority" into account. In addition, to search for a voice-recognized play-by-play comment on the start or end of an at-bat scene, this paper proposes a novel modeling method called "Local Modeling," in contrast to the "Global Modeling" used in the previous papers.
Extreme learning machine (ELM) is an effective machine learning technique that is widely used in image processing. In this paper, a new supervised method for segmenting blood vessels in retinal images is proposed based on an ELM classifier. The proposed algorithm first constructs a 7-D feature vector using multi-scale Gabor filters, the Hessian matrix, and the bottom-hat transformation. Then, an ELM classifier is trained on gold-standard vessel segmentation images to classify previously unseen images. The algorithm was tested on the publicly available DRIVE database, a digital image database for vessel extraction. Experimental results on both real-captured images and public database images demonstrate that our method achieves comparable performance to other methods, making the proposed algorithm a suitable tool for automated retinal image analysis.
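A minimal ELM classifier sketch: a random, untrained hidden layer followed by output weights obtained in closed form via the pseudoinverse. The per-pixel 7-D feature extraction (Gabor, Hessian, bottom-hat) is assumed to have been done beforehand; here X is any (n_samples, 7) array with 0/1 vessel labels y.

```python
import numpy as np

class ELM:
    """Single-hidden-layer ELM: random input weights, analytic output weights."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)          # random nonlinear features

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(2)[y]                              # one-hot vessel/background targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ T  # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Usage sketch: elm = ELM(200).fit(train_features, train_labels)
#               vessel_map = elm.predict(test_features)
```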
In this paper, we propose a brain-computer interface (BCI) based on collaborative steady-state visually evoked potentials (SSVEPs). A technique for estimating the common gaze direction of multiple subjects is studied with a view to controlling a virtual object in a virtual environment. The electroencephalograms (EEGs) of eight volunteers were recorded simultaneously with two virtual cubes as visual stimuli. The two virtual cubes flicker at different rates, 6 Hz and 8 Hz, and the corresponding SSVEP is observed around the occipital area. The amplitude spectra of the EEG activity of the individual subjects are analyzed, averaged, and synthesized to obtain the collaborative SSVEP. Machine learning is applied to estimate the common gaze direction of the eight subjects using supervised data from fewer than eight subjects. The estimation accuracy is perfect only in the case of the collaborative SSVEP. One-dimensional control of a virtual ball is performed by controlling the common eye gaze direction, which induces the collaborative SSVEP.
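A simplified sketch of the collaborative SSVEP decision. The paper applies machine learning to the synthesized spectra; the plain spectral-peak rule here is my simplification to show how per-subject amplitude spectra are averaged and compared at the two stimulus frequencies.

```python
import numpy as np

def common_gaze(eegs, fs, freqs=(6.0, 8.0)):
    """eegs: (n_subjects, n_samples) occipital EEG, one epoch per subject;
    fs: sampling rate in Hz. Returns the attended stimulus frequency."""
    spectra = np.abs(np.fft.rfft(eegs, axis=1))      # per-subject amplitude spectra
    collab = spectra.mean(axis=0)                    # collaborative (averaged) spectrum
    bins = np.fft.rfftfreq(eegs.shape[1], 1.0 / fs)
    amps = [collab[np.argmin(np.abs(bins - f))] for f in freqs]
    return freqs[int(np.argmax(amps))]               # larger SSVEP amplitude wins
```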
Indonesia is a country with high seismic activity, and every earthquake causes significant damage to infrastructure, including houses. An assessment model for earthquake damage based on a fuzzy system has previously been developed. It was aimed at assessing the rate of building damage after earthquake events, but it has particular weaknesses in both the criteria used and the accuracy of the model. This study develops a fuzzy inference model to determine the building damage hazard, especially for non-engineered houses, in a particular earthquake event (mitigation). The model is a three-stage fuzzy rule-based model built using data on one thousand houses damaged by the 2013 earthquake in Bener Meriah district, Aceh Province, Indonesia, together with peak ground acceleration (PGA) data, slope data extracted from a 30-meter digital elevation model (DEM), and the distance from the major fault extracted from a geological structure map. The main contributions of this research are the fuzzy membership functions developed for each determinant variable of the building damage hazard and the three-stage fuzzy inference process for determining the damage hazard of houses affected by an earthquake event. Using data on four hundred houses damaged by an earthquake at the same location, the three-stage fuzzy rule-based model implemented in this study was shown to determine the damage level of non-engineered houses better than previous models, with a model performance of 93%.
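An illustrative fragment of the kind of machinery one stage of such a model uses: a triangular membership function and one Mamdani-style rule evaluation over two of the determinant variables. The breakpoints (e.g., for PGA and slope) are hypothetical, not the paper's calibrated values.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-12),
                             (c - x) / (c - b + 1e-12)), 0.0, 1.0))

# Example crisp inputs: PGA in g, slope in degrees (values are illustrative).
pga, slope = 0.32, 18.0
pga_high = tri(pga, 0.2, 0.4, 0.6)          # membership of "PGA is high"
slope_steep = tri(slope, 10.0, 25.0, 40.0)  # membership of "slope is steep"

# Mamdani-style rule: IF PGA is high AND slope is steep THEN hazard is high.
hazard_high = min(pga_high, slope_steep)    # min as the AND operator
```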
In this study, a new method for realizing majority rule using noninvasive brain activity is presented. With the majority rule based on electroencephalograms (EEGs), a technique for determining the attention of multiple users is proposed. In general, a single-shot EEG ensures a short response time, but it is inevitably deteriorated by artifacts. To enhance the accuracy of the majority rule, we focus on collaborative P300 evoked potential signals. The collaborative P300 signal is prepared by averaging the individual single-shot P300 signals across subjects. In the experiments, the EEG signals of twelve volunteers were collected using auditory stimuli. The subjects paid attention to target stimuli and no attention to standard stimuli. The collaborative P300 signal was used to evaluate the performance of the majority rule. The proposed algorithm enables us to estimate the degree of attention of the group. The classification is based on supervised machine learning, and the accuracy is approximately 80%. Applications of this novel technique to multimedia content evaluation, neuromarketing, and computer-supported cooperative work are discussed.
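A short sketch of the collaborative-P300 construction: single-shot epochs are averaged across subjects so that subject-specific artifacts tend to cancel, and a simple window-mean amplitude can then feed the supervised classifier. The latency window below is an assumed typical P300 range, not the paper's setting.

```python
import numpy as np

def collaborative_p300(epochs, fs, window=(0.25, 0.5)):
    """epochs: (n_subjects, n_samples) single-shot EEG, time-locked to a stimulus;
    fs: sampling rate in Hz. Returns the group-averaged waveform and its mean
    amplitude in the assumed P300 latency window (seconds post-stimulus)."""
    grand = epochs.mean(axis=0)                 # average single shots across subjects
    i0, i1 = (int(t * fs) for t in window)      # latency window in samples
    return grand, grand[i0:i1].mean()           # feature for a supervised classifier
```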