This paper proposes a heuristic algorithm, the leader of dolphin herd algorithm (LDHA), for optimization problems of moderate dimensionality. LDHA is based on a leadership strategy and is designed by simulating the preying actions of dolphin herds, abstracting several intelligent behaviors such as “producing leaders,” “group gathering,” “information sharing,” and “rounding up prey.” The proposed algorithm is tested on 15 typical complex function optimization problems. The results show that, compared with particle swarm optimization and the genetic algorithm, LDHA achieves relatively high optimization accuracy and capability on complex functions. Furthermore, it is almost unaffected by the inimicality, multimodality, or dimensionality of the functions in the function optimization section, which implies better convergence. In addition, the ultra-high-dimensional optimization capability of the algorithm was tested using the IEEE CEC 2013 global optimization benchmark; unfortunately, the proposed algorithm has the limitation that it is not suitable for ultra-high-dimensional functions.
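The leadership strategy can be illustrated with a minimal leader-following swarm sketch. This is not the authors' exact update rules; the objective function, population size, step fraction, and noise scale below are illustrative assumptions:

```python
import random

def sphere(x):
    """Illustrative objective: minimize the sum of squares."""
    return sum(v * v for v in x)

def leader_swarm_minimize(f, dim=5, pop=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimize f by repeatedly moving a herd toward its current leader."""
    rng = random.Random(seed)
    herd = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        leader = min(herd, key=f)          # "producing leaders": best member leads
        moved = []
        for member in herd:
            # "Group gathering": each member moves halfway toward the leader,
            # with random exploration around it ("rounding up prey").
            moved.append([m + 0.5 * (l - m) + rng.gauss(0.0, 0.1)
                          for m, l in zip(member, leader)])
        moved[0] = leader[:]               # elitism: the leader itself survives
        herd = moved
    return min(herd, key=f)

best = leader_swarm_minimize(sphere)
```

The elitism step keeps the best-so-far solution in the herd, so the leader's objective value never worsens across iterations.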
In this paper, a dynamic stochastic general equilibrium model with price stickiness is constructed to analyze quantitatively the effect of interest rate liberalization on economic structure and monetary policy. Using parameter calibration and Bayesian estimation, we analyze impulse responses and numerical simulations of external shocks, namely technology shocks and monetary policy shocks. The empirical results lead to the following conclusions. First, interest rate liberalization is conducive to economic restructuring: the investment ratio and capital growth are suppressed, while the household and government consumption ratios are promoted. Second, interest rate liberalization lowers economic fluctuation and enhances the ability to withstand external shocks such as technology shocks and monetary policy shocks. Third, interest rate liberalization helps unblock the monetary policy transmission channels, as the effect of interest rate shocks on the real economy gradually increases.
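The impulse-response exercise can be illustrated with a minimal AR(1) shock process, the standard building block for technology and monetary policy shocks in DSGE models. The persistence and shock size below are illustrative, not the paper's calibrated values:

```python
def impulse_response(rho, shock=1.0, horizon=20):
    """Impulse response of an AR(1) process a_t = rho * a_{t-1} + e_t
    to a one-time unit innovation e_0 = shock."""
    path = [shock]
    for _ in range(horizon - 1):
        path.append(rho * path[-1])
    return path

# A persistent technology shock (rho = 0.9) decays geometrically to zero.
tech_irf = impulse_response(0.9)
```

Model variables such as output or consumption then respond to this shock path through the model's policy functions.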
A first-order approximation technique is not suited to handle issues such as welfare comparison and time-varying variance. Following Schmitt-Grohé and Uribe, in this paper we derive a second-order approximation to estimate a dynamic stochastic general equilibrium model with stochastic volatility, to capture the different impacts of level shocks and volatility shocks. Furthermore, the paper presents an application to a standard quantitative New Keynesian business cycle model, and the empirical results show that level shocks have positive effects on consumption, investment, and output, while volatility shocks have negative effects on all three.
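A sketch of why second-order accuracy matters here, in the standard perturbation notation of Schmitt-Grohé and Uribe (a generic expansion, not the paper's specific model): the policy function $g$ in the state $x$ and the perturbation parameter $\sigma$ expands around the nonstochastic steady state $(\bar{x},0)$ as

```latex
g(x,\sigma) \;\approx\; g(\bar{x},0) + g_x\,(x-\bar{x})
  + \tfrac{1}{2}\,g_{xx}\,(x-\bar{x})^2
  + \tfrac{1}{2}\,g_{\sigma\sigma}\,\sigma^2 ,
\qquad g_\sigma = g_{x\sigma} = 0 .
```

Uncertainty enters only through the term $\tfrac{1}{2} g_{\sigma\sigma}\sigma^2$, which a first-order (certainty-equivalent) approximation sets to zero; this is why capturing the effects of volatility shocks requires at least a second-order approximation.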
We propose to handle the complexity of utility spaces used in multi-issue negotiation by adopting a new representation that allows a modular decomposition of the issues and the constraints. This is based on the idea that a constraint-based utility space is nonlinear with respect to the issues, but linear with respect to the constraints, which allows us to rigorously map the utility space into an issue-constraint hyper-graph. Exploring the utility space then reduces to a message-passing mechanism along the hyper-edges of the hyper-graph by means of utility propagation. Optimal contracts are found efficiently using a variation of the Max-Sum algorithm. We evaluate the model experimentally using parameterized nonlinear utility spaces, showing that it can handle a large family of complex utility spaces by finding optimal contracts, outperforming previous sampling-based approaches. We also evaluate the model in a negotiation setting and show that, under high complexity, the social welfare can be greater than the sum of the individual agents’ best utilities.
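The constraint-linear representation can be sketched as follows: each constraint contributes its value whenever the contract lies inside its region, and the total utility is a sum over constraints. The cube constraints and issue domains below are illustrative, and exhaustive search stands in for the paper's Max-Sum utility propagation over the issue-constraint hyper-graph:

```python
from itertools import product

# A contract assigns a value to each issue; each cube constraint gives
# per-issue bounds and the utility it contributes when satisfied.
constraints = [
    ({"price": (3, 7), "quantity": (0, 5)}, 10),  # illustrative two-issue constraint
    ({"price": (5, 9)}, 4),                       # constraint on one issue only
    ({"quantity": (4, 9)}, 6),
]

def utility(contract):
    """Nonlinear in the issues, but linear in the constraints:
    the sum of values of all constraints the contract satisfies."""
    return sum(value for bounds, value in constraints
               if all(lo <= contract[i] <= hi for i, (lo, hi) in bounds.items()))

# Exhaustive search over a small discrete domain, in place of Max-Sum
# message passing, to find an optimal contract.
domain = range(10)
best = max(({"price": p, "quantity": q} for p, q in product(domain, domain)),
           key=utility)
```

An optimal contract here satisfies all three constraints at once, e.g. price in [5, 7] and quantity in [4, 5], for a total utility of 20.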
In exploring the 1-to-N map matching problem that exploits a compact map data description, we aim to improve the scalability of map matching as used in robot vision tasks. We explicitly target fast succinct map matching, which consists of map matching subtasks alone: offline map matching, which attempts to find compact part-based scene models that effectively explain individual maps using fewer, larger parts, and online map matching, which efficiently finds correspondences between part-based maps. Our part-based scene modeling approach is unsupervised and uses common pattern discovery (CPD) between the input and known reference maps. Results of our experiments, which use the publicly available radish dataset, confirm the effectiveness of the proposed approach.
Two main issues arise in practical imitation learning by humanoid robots observing human behavior: the first is segmenting and recognizing motion demonstrated naturally by a human being, and the second is utilizing the demonstrated motion for imitation learning. Specifically, the first involves motion segmentation and recognition based on the humanoid robot’s motion repertoire for imitation learning, and the second introduces a learning bias based on the demonstrated motion into the humanoid robot’s imitation learning to walk. We show the validity of our motion segmentation and recognition in a practical way and report the results of our investigation into the influence of the learning bias in humanoid robot simulations.
Signs are ubiquitous indoors and outdoors, and they are often used for finding public places and other locations. However, information on signs is inaccessible to many visually impaired people unless it is represented non-visually, such as with Braille, tactile graphics, or speech. Automatically reading text from signs in natural scene images is therefore a vital application for assisting visually impaired people. However, finding text in scene images is a great challenge because it cannot be assumed that the acquired image contains only characters: natural scene images usually contain diverse text in different sizes, styles, fonts, and colors, as well as complex backgrounds. We therefore turn to the development of a portable camera-based assistive system to aid visually impaired people in reading text from natural scenes. In this paper, a new method for character string extraction from scene images is discussed. The algorithm is implemented and evaluated using a set of natural scene images. Accuracy, precision, and recall rates of the proposed method are calculated and analyzed to determine its successes and limitations, and recommendations for improvements are given based on the results.
As the number of computer systems connected to the Internet increases exponentially, computer security has become a crucial problem, and many techniques for intrusion detection have been proposed to detect network attacks efficiently. Meanwhile, data mining algorithms based on Genetic Network Programming (GNP) have recently been proposed and applied to intrusion detection. GNP is a graph-based evolutionary algorithm that can extract many important class association rules by making use of the distinguished representation ability of its graph structure. In this paper, probabilistic classification algorithms based on multi-dimensional probability distributions are proposed, combined with the conventional class association rule mining of GNP, and applied to network intrusion detection for performance evaluation. The proposed classification algorithms are based on 1) one-dimensional probability density functions and 2) a two-dimensional joint probability density function. These functions represent the distributions of normal and intrusion accesses and efficiently classify new access data as normal, a known intrusion, or even an unknown intrusion. Simulations using the KDD Cup 1999 database from the MIT Lincoln Laboratory show some advantages of the proposed algorithms over the conventional mean and standard deviation-based method.
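The one-dimensional density-based classification idea can be sketched as follows: fit a normal density to a feature of each known class and assign a new access to the class with the highest density, falling back to "unknown" when every density is negligible. The feature values, classes, and threshold below are illustrative assumptions, and the GNP rule-mining stage is omitted:

```python
import math

def gaussian_pdf(x, mean, std):
    """Normal probability density function."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def fit(values):
    """Estimate the mean and standard deviation of a feature sample."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

# Illustrative connection-count feature for each class of access.
classes = {
    "normal": fit([2, 3, 3, 4, 5]),
    "intrusion": fit([40, 45, 50, 55, 60]),
}

def classify(x, threshold=1e-6):
    """Assign x to the class with the highest density; if every density
    falls below the threshold, flag a possible unknown intrusion."""
    densities = {c: gaussian_pdf(x, m, s) for c, (m, s) in classes.items()}
    label, best = max(densities.items(), key=lambda kv: kv[1])
    return label if best >= threshold else "unknown"
```

A value near neither fitted distribution, such as 1000, falls below the threshold under both densities and is flagged as unknown, which mirrors how density-based classification can detect previously unseen intrusion types.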