Computational Intelligence (CI) techniques have become indispensable in many bioinformatics applications. In this article, we make the interested reader aware of the necessity of CI, provide a basic taxonomy of proteomics, and discuss the use, variety, and potential of CI techniques in a number of common as well as upcoming proteomics applications.
Protein structural class prediction (SCP) is an important task in identifying protein tertiary structure and protein functions. In this study, we propose a feature extraction technique for predicting structural classes. The technique utilizes bigram (of adjacent and k-separated amino acids) information derived from the Position Specific Scoring Matrix (PSSM). The technique has shown promising results when evaluated on the benchmark Ding and Dubchak dataset.
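As a rough illustration of the k-separated bigram idea, the following is a minimal sketch of extracting bigram features from a PSSM. The 3-column toy matrix and the row-major flattening are assumptions made here for brevity; a real PSSM has 20 amino-acid columns.

```python
# Hypothetical sketch: k-separated bigram features from a PSSM, where each
# row of `pssm` holds per-position substitution scores (3 columns for
# brevity; real PSSMs have 20 amino-acid columns).
def bigram_features(pssm, k=1):
    n_cols = len(pssm[0])
    feats = [[0.0] * n_cols for _ in range(n_cols)]
    # Accumulate products of scores at positions t and t+k.
    for t in range(len(pssm) - k):
        for i in range(n_cols):
            for j in range(n_cols):
                feats[i][j] += pssm[t][i] * pssm[t + k][j]
    # Flatten the n_cols x n_cols matrix into one feature vector.
    return [v for row in feats for v in row]

pssm = [[0.1, 0.7, 0.2],
        [0.5, 0.3, 0.2],
        [0.2, 0.2, 0.6]]
vec = bigram_features(pssm, k=1)  # 9-dimensional feature vector
```

Adjacent bigrams correspond to k=1; larger k captures longer-range pairwise information along the sequence.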
A feature extraction method based on wavelet decomposition is proposed for classifying asthma from capnograms. Its computational cost is low and its performance is adequate for classifying asthma in real time. Experiments performed using 23 capnograms from an asthma camp in Cuba showed a best classification accuracy of 97.39%. The time required for a physiological multiparameter monitor to determine the suitable features of capnograms averaged 8 seconds. The proposal is to be used as part of a decision support system for asthma classification being developed by the TITECH and TMDU research groups.
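To give a flavor of low-cost wavelet-based feature extraction, here is a minimal single-level Haar decomposition with detail-band energies as a compact feature vector. The choice of the Haar wavelet, the energy features, and the toy signal are all illustrative assumptions, not the paper's actual design.

```python
# One plausible low-cost scheme: Haar decomposition of a sampled capnogram,
# using detail-band energies as features (illustrative only).
def haar_step(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy(signal, levels=2):
    # Collect the energy of each detail band as one feature per level.
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(sum(d * d for d in detail))
    return feats

capnogram = [0.0, 0.2, 0.9, 1.0, 1.0, 0.8, 0.1, 0.0]  # toy expiration cycle
features = wavelet_energy(capnogram, levels=2)
```

Each Haar step uses only additions and halvings, which is why this family of features is attractive for real-time use on a multiparameter monitor.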
This paper describes a foot-age estimation system based on fuzzy logic. Foot-age is an age-related index that indicates the degree of aging reflected in gait condition. The system estimates foot-age from changes in sole pressure distribution during walking. The sole pressure distribution is acquired by a mat-type load distribution sensor. Our estimation system extracts four gait features from the sole pressure data and calculates fuzzy degrees for the young, middle-aged, and elderly age groups from these gait features. The foot-age of the person walking on the sensor is calculated by the fuzzy MIN-MAX center of gravity method. In our experiment, we employed 93 male and 132 female volunteers, and the system estimated their foot-ages with low mean absolute error relative to their true ages. Additionally, we developed a diagnosis system based on the estimated foot-age.
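The MIN-MAX center of gravity step can be sketched as follows. The triangular output sets, their age ranges, and the example rule strengths are all invented for illustration; the abstract does not specify the actual membership functions or gait rules.

```python
# Hedged sketch of fuzzy MIN-MAX center-of-gravity defuzzification,
# assuming triangular output sets for young / middle / elderly groups
# (age ranges and rule strengths are invented).
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def foot_age(degrees, step=1.0):
    # degrees: rule strengths for (young, middle, elderly) from gait features.
    sets = [(10, 25, 40), (30, 45, 60), (50, 70, 90)]  # assumed age ranges
    num = den = 0.0
    x = 0.0
    while x <= 100.0:
        # MIN: clip each output set by its rule strength; MAX: aggregate them.
        mu = max(min(d, tri(x, *s)) for d, s in zip(degrees, sets))
        num += mu * x
        den += mu
        x += step
    return num / den if den else 0.0  # center of gravity of the aggregate

age = foot_age((0.2, 0.8, 0.1))  # mostly "middle age" rules fired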
Visual attention region prediction has attracted the attention of intelligent systems researchers because it makes the interaction between human beings and intelligent nonhuman agents more intelligent. Visual attention region prediction uses multiple input factors such as gestures, face images, and eye gaze position. Physically disabled persons may find some of these movements difficult. In this paper, we propose using gaze position estimation, achieved by extracting image features, as input to a prediction system. Our approach is divided into two parts: user gaze estimation and visual attention region inference. A neural network is used as the decision-making unit in user gaze estimation, after which the user's gaze position on the computer screen is estimated. In visual attention region inference, the attention region is inferred by fuzzy inference after image feature maps and saliency maps have been extracted and computed. User experiments conducted to evaluate the prediction accuracy of our proposed method indicated that its performance at predicting attention region positions depends on the image.
In this paper, we perform image labeling based on the probabilistic integration of local and global features. Several conventional methods label pixels or regions using features extracted from local regions and local contextual relationships between neighboring regions. However, the labeling results tend to depend on local viewpoints. To overcome this problem, we propose an image labeling method that utilizes both local and global features. We compute the posterior probability distributions of the local and global features independently and integrate them by taking their product. To compute the probability for the global region (the entire image), Bag-of-Words is used. In contrast, local co-occurrence between color and texture features is used to compute the local probability. In the experiments, we use the MSRC21 dataset. The results demonstrate that the use of the global viewpoint significantly improves labeling accuracy.
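The product-of-posteriors integration can be sketched in a few lines. The class names and toy probability values below are invented for illustration; only the combine-by-product-and-renormalize step reflects the described method.

```python
# Sketch: integrate local and global label posteriors by their product,
# then renormalize (class names and values are illustrative).
def integrate(p_local, p_global):
    product = {c: p_local[c] * p_global[c] for c in p_local}
    z = sum(product.values())
    return {c: p / z for c, p in product.items()}  # renormalize to sum to 1

p_local = {"grass": 0.5, "tree": 0.3, "cow": 0.2}   # e.g. from color/texture co-occurrence
p_global = {"grass": 0.6, "tree": 0.1, "cow": 0.3}  # e.g. from Bag-of-Words over the image
posterior = integrate(p_local, p_global)
label = max(posterior, key=posterior.get)
```

Taking the product means a label must be plausible under both the local evidence and the whole-image context, which is what suppresses purely local mislabelings.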
An efficient super resolution algorithm based on edge direction is proposed, built on the dimensional reduction of color images and 3 types of edge direction. The basic idea is to reduce image dimensions, especially for color images, and to interpolate pixels by using 3 simple edge directions: vertical, horizontal, and diagonal. The proposed algorithm eliminates more color artifacts than Bicubic interpolation. The results of experiments using 30 natural images confirm that the PSNR of the proposed method achieves the same quality as Fast Curvature Based Interpolation (FCBI). We confirmed that the computation time of the proposed method was 40% shorter for RGB images and 20% shorter for grayscale images than that of the previously fastest method, i.e., FCBI. As one application of our proposal, we show efficient panorama image generation based on the proposed super resolution method.
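The edge-directed interpolation idea can be illustrated for a single unknown pixel: average along whichever of the three directions shows the smallest intensity difference. The neighbor layout and values below are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative edge-directed interpolation of one missing pixel:
# pick the direction (vertical / horizontal / diagonal) whose two known
# neighbors differ least, and average along it.
def interp_pixel(up, down, left, right, diag_a, diag_b):
    candidates = {
        "vertical": (up, down),
        "horizontal": (left, right),
        "diagonal": (diag_a, diag_b),
    }
    direction = min(candidates, key=lambda d: abs(candidates[d][0] - candidates[d][1]))
    a, b = candidates[direction]
    return (a + b) / 2, direction

# A strong diagonal edge: interpolating across it (horizontally) would blur.
value, direction = interp_pixel(up=100, down=104, left=30, right=200,
                                diag_a=101, diag_b=103)
```

Interpolating along the most uniform direction avoids averaging across an edge, which is the main source of the color artifacts the method reduces.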
Information on bone thickness is useful to surgeons in fixing pedicle screws in place. The quality of pedicle screw insertion continues to increase with the introduction of such techniques as navigation based on computed tomography and fluoroscopy. These techniques reduce error in pedicle screw placement and injury. However, little has been reported on the real-time measurement of the depth drilled by the pedicle screw through cancellous bone, also known as trabecular or sponge bone. Judging the boundary between cortical and cancellous bone currently depends on palpation by the physician, an inaccurate technique that may produce errors in screw placement and the risk of injury during surgery. Ultrasound is used to help overcome such problems. In this study, bone thickness is estimated using an ultrasound transducer attached to 20 mm of polymethyl methacrylate, a clear glass-like acrylic. The bone thickness of five specimens was measured using ultrasound echo signals. The error in estimating bone thickness was small, 8.121%, showing the accuracy of bone thickness estimation to be more than 90%, which is suitable for use in estimating bone thickness during pedicle screw insertion.
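The underlying pulse-echo geometry reduces to thickness = v * t / 2, since the echo traverses the bone twice. The speed-of-sound value below is an assumed ballpark figure for illustration, not the paper's calibrated value.

```python
# Back-of-envelope pulse-echo thickness estimate: thickness = v * t / 2.
# The speed of sound in bone is an assumed illustrative value.
def bone_thickness_mm(echo_delay_us, speed_m_per_s=2800.0):
    # The echo travels through the bone and back, hence the factor of 2.
    return speed_m_per_s * (echo_delay_us * 1e-6) / 2 * 1000  # result in mm

t = bone_thickness_mm(10.0)  # a 10 microsecond round-trip delay
```

In practice the delay is measured between echoes from the front and back bone interfaces, and the assumed sound speed dominates the estimation error.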
Sustainable waste management systems necessarily include many interacting factors. Due to the complexity and uncertainties occurring in sustainable waste management systems, we propose the use of Fuzzy Cognitive Maps (FCM) and the Bacterial Evolutionary Algorithm (BEA) to support the planning and decision-making process of integrated systems, as the combination of FCM and BEA seems suitable for modeling such complex mechanisms as Integrated Waste Management Systems (IWMS). This paper is an attempt to assess the sustainability of an IWMS in a holistic approach. While the FCM model represents the IWMS as a whole, the BEA is used for parameter optimization and identification. An interpretation of the results obtained by the FCM for an actual regional IWMS is also presented. We have obtained some surprising results, contradicting general assumptions in the literature concerning the relative importance of the constituent components of waste management systems.
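An FCM iterates concept activations through a weighted influence matrix. The sketch below uses the common sigmoid update rule; the three concepts and their weights are invented purely to show the mechanism, not taken from the paper's IWMS model.

```python
# Minimal Fuzzy Cognitive Map step, assuming the common sigmoid update
# A(t+1) = f(W^T A(t)); concepts and weights here are invented.
import math

def fcm_step(state, weights, lam=1.0):
    nxt = []
    for j in range(len(state)):
        s = sum(state[i] * weights[i][j] for i in range(len(state)))
        nxt.append(1.0 / (1.0 + math.exp(-lam * s)))  # squash to (0, 1)
    return nxt

# 3 toy concepts, e.g. recycling rate, landfill load, public awareness.
W = [[0.0, -0.6, 0.0],
     [0.0, 0.0, 0.3],
     [0.8, 0.0, 0.0]]
state = [0.5, 0.5, 0.5]
for _ in range(20):  # iterate toward a fixed point
    state = fcm_step(state, W)
```

In the described approach the weight matrix W is exactly what the BEA would optimize and identify against observed system behavior.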
Digital image watermarking based on singular value decomposition (SVD) is highly robust against misuse, but it lacks the ability to distinguish whether extracted watermarks are correct because the singular values carry less information than the two orthogonal matrices. To achieve highly accurate watermark extraction while maintaining high robustness, we propose robust watermarking based on the discrete wavelet transform (DWT) and n-diagonalization formalized by the Householder transformation. We propose that DWT be used to ensure imperceptibility and that n-diagonalization be used to control the quantity of information related to watermark extraction accuracy. Experimental results confirm the robustness of our proposed method and show that its extraction accuracy is approximately 2 times better than that of SVD-based watermarking.
The effect of option markets on their underlying markets has been studied intensively since the first option contract was listed. Despite considerable effort, including the development of theoretical and empirical approaches, we do not yet have conclusive evidence on this effect. We investigate the effect of option markets, especially that of dynamic hedging, on their underlying markets by using an artificial market. We propose a two-market model in which an option market and its underlying market interact. In our model, there are three types of agents: underlying local agents who trade only on the underlying market, option local agents who trade only on the option market, and global agents who trade on both the underlying and the option market. In this simulation, we investigate the effect of hedgers, a type of global agent, on the underlying market. Hedgers who hold option contracts trade the underlying asset to keep a delta-neutral position. This behavior is called dynamic hedging. We simulate two scenarios: low-frequency hedging, and high-frequency hedging in which a hedger can send a hedge order at any time a hedging mismatch appears. We confirmed that dynamic hedging increases or decreases the volatility of the underlying market under certain conditions.
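Delta-neutral rebalancing can be sketched with a Black-Scholes call delta: at each rebalance, the hedger adjusts its underlying position to match the option delta. All parameter values below are illustrative, not the paper's agent settings, and the paper's agents need not use Black-Scholes deltas.

```python
# Sketch of dynamic (delta-neutral) hedging with a Black-Scholes call delta;
# strike, rate, volatility, and the price path are illustrative.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_delta(s, k, r, sigma, tau):
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1)

def rebalance(position, s, k, r, sigma, tau, contracts=1.0):
    # To stay delta neutral, the hedged position must equal delta * contracts.
    target = call_delta(s, k, r, sigma, tau) * contracts
    order = target - position  # buy (>0) or sell (<0) the difference
    return target, order

pos = 0.0
for price in [100, 103, 98, 105]:  # toy underlying price path
    pos, order = rebalance(pos, price, k=100, r=0.01, sigma=0.2, tau=0.25)
```

The hedge-frequency scenarios differ only in how often this rebalance step is allowed to fire, which is what feeds back into underlying-market volatility.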
The network organization (NO) is flexible and changes rapidly to address events in volatile environments. Such organizations are preferred to traditional organizations that have merely been networked. The property of the NO that enables it to change so rapidly is plasticity. A model is presented for spontaneously formed NOs, and the quality of plasticity is discussed in the context of this model. We touch on how this model accounts for external change in an environment through internal adjustment. A case study illustrates the main tenets of our conceptualization.
A Distributed Constraint Optimization Problem (DCOP) is a fundamental problem that can formalize various applications related to multi-agent cooperation. Many application problems in multi-agent systems can be formalized as DCOPs. However, many real-world optimization problems involve multiple criteria that should be considered separately and optimized simultaneously. A Multi-Objective Distributed Constraint Optimization Problem (MO-DCOP) is an extension of a mono-objective DCOP. Compared to DCOPs, few works exist on MO-DCOPs. In this paper, we develop a novel complete algorithm for solving an MO-DCOP. This algorithm utilizes a widely used method called Pareto Local Search (PLS) to generate an approximation of the Pareto front. The obtained information is then used to guide the search thresholds in a Branch and Bound algorithm. In the evaluations, we measure the runtime of our algorithm and show empirically that using a Pareto front approximation obtained by a PLS algorithm significantly speeds up the search in a Branch and Bound algorithm.
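The core object PLS approximates is the Pareto front: the set of solutions not dominated on all objectives. A minimal sketch of dominance filtering, assuming minimization and invented cost vectors:

```python
# Minimal Pareto-front filter over cost vectors (minimization assumed).
def dominates(a, b):
    # a dominates b if a is no worse on every objective and strictly
    # better on at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

costs = [(1, 5), (2, 2), (3, 1), (4, 4), (2, 3)]
front = pareto_front(costs)  # (4, 4) and (2, 3) are dominated by (2, 2)
```

In the described algorithm, bounds derived from such an approximate front prune Branch and Bound subtrees that cannot contain non-dominated solutions.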
Highly detailed reproducibility of multi-agent simulations is strongly demanded. To realize such highly reproducible multi-agent simulations, it is important to make each agent respond to its dynamically changing environment as well as to scale the simulation to cover the important phenomena that could be produced. In this paper, we present a programming framework that realizes highly scalable execution of such simulations as well as detailed agent behaviors. The framework helps simulation developers utilize many GPGPU-based parallel cores in their simulation programs by using the proposed OpenCL-based multi-platform agent code conversion engine. We show our prototype implementation of the framework and how it helps simulation developers code, test, and evaluate agent code that selects actions and path plans reactively in dynamically changing large-scale simulation environments under various hardware and software settings.
As new network communication tools are developed, social network services (SNS) such as Facebook and Twitter are becoming part of a social phenomenon globally impacting society. Many researchers are therefore studying the structure of relationship networks among users. We propose a greedy network growth model that appropriately adds nodes and links while automatically reproducing the target network. The model handles a wide range of networks with high expressive ability. Results of experiments showed that we accurately reproduced 92.4% of 189 target networks from real services. The model also enabled us to reproduce 30 networks built by existing network models. We thus show that the proposed model covers the expressiveness of many existing network models.
Nowadays, the number of users of Twitter, one of the most famous social networking services, has rapidly increased, and many people exchange information via Twitter. When the Great East Japan Earthquake struck in 2011, people were able to obtain information from social networking services. Although Twitter played an important role, one problem in particular was pointed out: the diffusion of false rumors. In this study, we propose an information diffusion model based on the SIR model and discuss how to prevent false rumor diffusion.
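For reference, the classical SIR dynamics the rumor model builds on can be integrated with a simple Euler step. The rates and time step below are illustrative, not the paper's fitted parameters.

```python
# Euler integration of the classical SIR model (S: susceptible,
# I: infected/spreading, R: recovered/stopped); rates are illustrative.
def sir_step(s, i, r, beta, gamma, dt):
    new_inf = beta * s * i * dt   # contacts that start spreading the rumor
    new_rec = gamma * i * dt      # spreaders who stop
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):  # simulate 100 time units with dt = 0.1
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.1, dt=0.1)
```

In a rumor-diffusion reading, countermeasures correspond to raising gamma (debunking makes spreaders stop) or lowering beta (limiting exposure), which is the kind of lever a prevention policy would adjust.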
Negotiation is one means of making decisions collaboratively. We propose efficient protocols for identifying deals in such negotiations. Specifically, we focus on situations in which the negotiators must agree upon one option from among many. This work proposes solutions to problems faced when automating negotiations over multiple and interdependent issues. When the issues under negotiation are interdependent, previous and future decisions concerning other issues affect how one decides the current issue. Therefore, we generally must deal with all of the issues at the same time. To identify deals for negotiations over multiple and interdependent issues, previous work has proposed a bidding-based protocol that works well only when there is a high probability that the agents in the negotiation have local maxima at similar positions in the contract space. This happens only when the contract space is small and the number of agents in the negotiation is low. Otherwise, the protocol fails to identify deals. We propose a multi-round bidding approach in which agents submit supersets of their bids from earlier rounds. A superset of a bid is created by relaxing the constraints that it satisfies. We use the same concept of negotiation with relaxed constraints to extend a Hill Climbing (HC) protocol. HC has a linear execution time cost, so ordinarily it cannot be used for complex negotiations, but we modify it so that it can be used optimally and efficiently for such negotiations.
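To show why plain hill climbing is cheap but vulnerable on interdependent issues, here is a toy single-flip hill climb over a binary contract space. The welfare function, the agents' valuations, and the step budget are all invented for illustration; the actual protocol mediates between agents rather than optimizing a known sum.

```python
# Toy hill climbing over a binary contract space, maximizing an assumed
# social-welfare function (sum of two invented agent utilities).
import random

def hill_climb(utility, n_issues, steps=200, seed=0):
    rng = random.Random(seed)
    contract = [rng.randint(0, 1) for _ in range(n_issues)]
    best = utility(contract)
    for _ in range(steps):
        idx = rng.randrange(n_issues)
        contract[idx] ^= 1            # flip one issue
        value = utility(contract)
        if value >= best:
            best = value              # keep non-worsening moves
        else:
            contract[idx] ^= 1        # revert the flip
    return contract, best

def welfare(c):
    u1 = 3 * c[0] + 2 * c[1] * c[2]   # agent 1: likes issue 0, and 1 & 2 together
    u2 = 2 * c[3] + c[0] * c[3]       # agent 2: likes issue 3, more so with 0
    return u1 + u2

contract, value = hill_climb(welfare, n_issues=4)
```

Each step evaluates one flipped contract, giving the linear execution cost mentioned above; the interdependent terms (c[1]*c[2], c[0]*c[3]) are what create the local maxima that motivate the relaxed-constraint extension.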
In this paper, we analyze a meta-rewards game, which is part of a generalized metanorms game. We analyze the game theoretically and conduct computer simulations of it. We clarify the payoff structure with respect to the benefits and costs of reward and meta-reward actions. Whereas the benefit of a meta-reward, which is a rewarding action for other rewarding actions, should be greater than its cost, it should not be much greater than the benefit of a rewarding action for cooperative actions. This insight can be applied to the actions of social media users, and we propose a set of policies for the managers of social media services.
We studied a public goods game in which metanorms work and new entries are permitted. Anyone, whether a cooperator or a non-cooperator, can try to participate in the game, but the game's manager can control a policy determining who is admitted. By changing the level of control, we investigated what types of policy are effective in maintaining cooperation. In particular, we compared a strict population management policy, under which only cooperative participants are permitted to enter, with a simple population management policy, under which non-cooperative participants are also permitted. Our simulations of the model surprisingly revealed that the level of cooperation in the game collapses when the strict policy is adopted. On the other hand, the cooperation level remains robustly high under a tolerant policy that permits some invaders who are perfect defectors. We conclude that the existence of a small amount of defection helps sustain cooperation in the society. We call this effect the social vaccine effect.
We propose a method that uses a large number of digital photographs to produce highly accurate estimates of the locations of subjects that have attracted a crowd’s attention. Recently, a very active area of research has been to use humans as sensors in real-world observations that require a large amount of data. Some of these studies have attempted to produce real-time estimates of the subjects that are attracting a crowd’s attention by quickly collecting a large number of photographs. These studies are based on the assumption that photographers take pictures when they encounter interesting events. Some of the proposed methods achieve high availability by using only photographing information, which includes the location and azimuth of the camera and is automatically embedded into each photograph. Since this data is very small compared to the pixel information, the load on the communication infrastructure is reduced. However, there are accuracy problems when there are many attractive subjects in a small region, and such subjects cannot be found with traditional methods that use a sequential search strategy. The proposed method overcomes this problem by applying nonnegative matrix factorization (NMF) to the estimation of each subject's location. We verified the effectiveness of this approach through computational experiments and an experiment in a realistic environment.
Dental classification and numbering on posterior dental radiographs are important tasks for forensic and biomedical applications. This paper proposes a novel method of classification and numbering on posterior dental radiographs using the Decimation-Free Directional Filter Bank (DDFB) and mesiodistal neck detection. The method starts with a segmentation step that decomposes the dental image into directional images using the DDFB. Detection of the mesiodistal neck of a tooth separates the crown from the root. Finally, we use a support vector machine for classification and numbering. The experimental results achieved a classification accuracy rate of 91%, which confirms the robustness of the proposed method for solving the problem of dental classification and numbering.
In many computer vision applications, nearest neighbor searching in high-dimensional spaces is often the most time-consuming component, and we have few algorithms for solving these high-dimensional nearest neighbor search problems that are faster than linear search. Approximate nearest neighbor search algorithms can play an important role in achieving significantly faster running times with relatively small errors. This paper considers the improvement of the PCA-tree nearest neighbor search algorithm by employing nearest neighbor distance statistics. During the preprocessing phase of the PCA-tree nearest neighbor search algorithm, a data set is partitioned into clusters by the successive use of Principal Component Analysis (PCA). The search performance is significantly improved if the data points are sorted by leaf node and the threshold value is updated each time a smaller distance is found. The threshold is updated by the ε-approximate nearest neighbor approach together with the fixed-threshold approach. Performance can be further improved by the annulus bound approach. Moreover, nearest neighbor distance statistics are employed to further improve the efficiency of the search algorithm, and several experimental results demonstrate how its efficiency is improved.
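The shrinking-threshold idea behind ε-approximate search can be sketched as follows: keep the best distance found so far and skip any leaf whose lower bound to the query exceeds best/(1 + ε). The flat list of (centroid, radius, points) clusters below is a made-up partition standing in for actual PCA-tree leaves.

```python
# Sketch of epsilon-approximate nearest neighbor search with a shrinking
# threshold; the cluster partition is invented, not a real PCA-tree.
import math

def approx_nn(query, clusters, eps=0.1):
    best, best_pt = float("inf"), None
    # Visit leaves nearest-centroid first (the "sorted by leaf node" idea).
    for centroid, radius, points in sorted(
            clusters, key=lambda c: math.dist(query, c[0])):
        lower_bound = max(0.0, math.dist(query, centroid) - radius)
        if lower_bound > best / (1.0 + eps):
            continue                      # prune: leaf cannot beat best/(1+eps)
        for p in points:
            d = math.dist(query, p)
            if d < best:
                best, best_pt = d, p      # shrink the threshold
    return best_pt, best

clusters = [
    ((0.0, 0.0), 1.5, [(0.5, 0.5), (1.0, -1.0)]),
    ((10.0, 10.0), 1.0, [(9.5, 10.0), (10.5, 9.0)]),
]
pt, d = approx_nn((0.2, 0.3), clusters)
```

Visiting promising leaves first makes the threshold shrink early, so later leaves are pruned cheaply; the fixed-threshold and annulus-bound refinements tighten this pruning further.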
The recent rise in maritime traffic volume, resulting from an increase in international marine trading volumes and the growing popularity of marine leisure activities, has increased the frequency of marine accidents. Vessels exhibiting abnormal navigation patterns (e.g., weaving in and out of courses or rotating in the same position) may have a serious impact on other vessels staying on normal courses. For this reason, ground VTS centers keep track of criminal or damaged vessels in tandem with the marine police. However, studies aimed at assisting the identification of apparent risk factors resulting from human error have been almost nonexistent to date. In this respect, this study implements an intelligent system that can identify vessels exhibiting abnormal navigation patterns based on fuzzy inference, in order to assist controllers and mates alike.
Recent advances in machine learning and computer vision have led to the development of several sophisticated learning schemes for object recognition by convolutional networks. One relatively simple learning rule, the Winner-Kill-Loser (WKL), was shown to be efficient at learning higher-order features in the neocognitron model when used in a written digit classification task. The WKL rule is one variant of incremental clustering procedures that adapt the number of cluster components to the input data. The WKL rule seeks to provide a complete, yet minimally redundant, covering of the input distribution. It is difficult to apply this approach directly to high-dimensional spaces since it leads to a dramatic explosion in the number of clustering components. In this work, a small generalization of the WKL rule is proposed to learn from high-dimensional data. We first show that the learning rule leads mostly to V1-like oriented cells when applied to natural images, suggesting that it captures second-order image statistics not unlike variants of Hebbian learning. We further embed the proposed learning rule into a convolutional network, specifically, the Neocognitron, and show its usefulness on a standard written digit recognition benchmark. Although the new learning rule leads to a small reduction in overall accuracy, this small reduction is accompanied by a major reduction in the number of coding nodes in the network. This in turn confirms that by learning statistical regularities rather than covering an entire input space, it may be possible to incrementally learn and retain most of the useful structure in the input distribution.
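A Winner-Kill-Loser-style update can be sketched as incremental clustering in which the nearest component (the winner) absorbs the input, a new component is created when nothing is close enough, and components made redundant by the winner are removed. The thresholds, learning rate, and data are invented for illustration and omit the paper's high-dimensional generalization.

```python
# Hedged sketch of a Winner-Kill-Loser-style incremental clustering rule;
# thresholds and learning rate are invented.
import math

def wkl_update(centers, x, create_thresh=1.0, kill_thresh=0.5, lr=0.2):
    if not centers:
        return [list(x)]
    dists = [math.dist(c, x) for c in centers]
    w = min(range(len(centers)), key=dists.__getitem__)
    if dists[w] > create_thresh:
        centers.append(list(x))          # input uncovered: create a component
        return centers
    winner = centers[w]
    for i in range(len(winner)):         # winner moves toward the input
        winner[i] += lr * (x[i] - winner[i])
    # Kill losers now redundant with (too close to) the winner.
    return [c for j, c in enumerate(centers)
            if j == w or math.dist(c, winner) > kill_thresh]

centers = []
for x in [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0), (0.05, 0.02)]:
    centers = wkl_update(centers, x)
```

Creation covers the input distribution while killing removes redundancy, which is how the rule aims at a complete yet minimally redundant covering.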
Recent improvements in embedded systems have enabled learning algorithms to provide realistic solutions for system identification problems. Existing learning algorithms, however, continue to have limitations when learning on embedded systems, where physical memory space is constrained. To overcome this problem, we propose a Limited General Regression Neural Network (LGRNN), which is a variation of the general regression neural network proposed by Specht, or of simplified fuzzy inference systems. The LGRNN continues incremental learning even if the number of instances exceeds the maximum number of kernels in the LGRNN. We demonstrate the advantages of the LGRNN by comparing it to other kernel-based perceptron learning methods. We also propose a lightweight LGRNN algorithm, LGRNN-Light, for reducing computational complexity. As an example of its application, we present a Maximum Power Point Tracking (MPPT) microconverter for photovoltaic power generation systems. MPPT is essential for improving the efficiency of renewable energy systems. Although various techniques exist that can realize MPPT, few are able to realize quick control using conventional circuit design. The LGRNN enables the MPPT converter to be constructed at low cost using the conventional combination of a chopper circuit and microcomputer control. The LGRNN learns the Maximum Power Point (MPP) found by Perturb and Observe (P&O) and immediately sets the converter reference voltage after a sudden irradiation change. By using this strategy, the MPPT responds quickly without a predetermination of parameters. The experimental results suggest that, after learning, the proposed converter controls the chopper circuit within 14 ms after a sudden irradiation change. This rapid response property is suitable for efficient power generation, even under the shadow flicker conditions that often occur in solar panels located near large wind turbines.
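The kernel averaging at the heart of a GRNN (Specht's Nadaraya-Watson form) can be sketched in a few lines; the LGRNN described above additionally caps the number of stored kernels. The (irradiance, reference voltage) training pairs and the bandwidth below are invented for illustration.

```python
# Minimal general regression neural network prediction (Nadaraya-Watson
# kernel averaging); training pairs and bandwidth are illustrative.
import math

def grnn_predict(x, kernels, sigma=0.5):
    # Each kernel is (center, target); output is the kernel-weighted average.
    num = den = 0.0
    for center, target in kernels:
        w = math.exp(-((x - center) ** 2) / (2 * sigma ** 2))
        num += w * target
        den += w
    return num / den

# Hypothetical (irradiance, learned MPP reference voltage) pairs.
kernels = [(0.2, 28.0), (0.5, 30.0), (0.8, 31.5)]
v_ref = grnn_predict(0.5, kernels)
```

Prediction cost grows with the number of stored kernels, which is exactly why a memory-constrained embedded variant must bound that number while continuing to learn incrementally.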