An image encryption scheme that combines a hyperchaotic system with standard weighted fractional Fourier transform theory is proposed. Simulation results showed that the grayscale distribution of the encrypted image was balanced, adjacent pixels were almost completely uncorrelated, and the encrypted image was highly sensitive to the secret key, offering good robustness against attacks and a large key space.
The brushless DC motor (BLDCM) speed control system is subject to various uncertainties, such as reference speed mutation, noise, and parameter changes. However, the widely used proportional-integral (PI) control method cannot handle these uncertainties well. A novel discrete adaptive control scheme with Multiple-Step-Guess (MSG) estimation for the BLDCM speed control system is proposed in this contribution. MSG estimation is first developed and applied to the BLDCM speed control system; it estimates the BLDCM model parameters online using only five steps of history sampled from the input and output signals. The tracking adaptive control law is designed to ensure that the speed tracks the reference speed rapidly and accurately. Extensive simulations comparing PI control and recursive least squares adaptive control (RLSAC) verify that the BLDCM speed response under MSG adaptive control (MSGAC) has better dynamic and steady-state performance under reference speed mutation and BLDCM parameter changes. Simulation results illustrate that the proposed method is effective and robust against the uncertainties of the BLDCM speed control system.
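The paper's exact MSG formulas are not reproduced here; as a loose illustration of the general idea of estimating model parameters online from a short window of input/output history, a least-squares fit of a hypothetical first-order discrete model over five samples can be sketched as:

```python
import numpy as np

def window_ls_estimate(u, y):
    # Hypothetical sketch (not the paper's exact MSG formulas): estimate the
    # parameters (a, b) of a first-order discrete model
    #     y[k+1] = a*y[k] + b*u[k]
    # from a short sliding window of input/output samples.
    Phi = np.column_stack([y[:-1], u[:-1]])        # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta                                   # [a_hat, b_hat]
```

With five samples the regression has four equations and two unknowns, so a single least-squares solve per control step suffices, which is why such window-based estimators are cheap to run online.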
The study of the relationship between the concentration of PM2.5 and the local air quality index (AQI) is significant for the improvement of urban air quality. This study considers not only multifractal cross-correlation but also the fluctuation conduction mechanism. An asymmetric multifractal detrended cross-correlation analysis (MF-DCCA) method based on fluctuation conduction is introduced to empirically explore the causality and conduction time between air quality factors and PM2.5 concentration. The empirical results indicate the existence of a bidirectional fluctuation conduction effect between PM2.5 and PM10, SO2, and NO2 in Hangzhou, China, with a conduction time of 30 hours; this effect is non-existent between PM2.5 and O3. In addition, there is a unidirectional fractal fluctuation conduction between PM2.5 and CO with a conduction time of 21 hours.
The output elasticity estimated by the traditional Cobb-Douglas production function is a fixed constant, which describes developed countries with relatively stable factor shares well but fails to describe developing countries whose factor shares change during economic transition, such as China. In this paper, we construct a time-varying elasticity production function model and extend the Cobb-Douglas production function to a time-varying elasticity Cobb-Douglas production function. The semi-parametric varying-coefficient quantile model, together with the local polynomial and two-phase methods, is used to estimate the time-varying elasticities of capital and labor. Empirical research on Chinese economic growth shows that the time-varying elasticity of capital is declining while that of the labor force is gradually increasing.
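The extension can be written compactly; the symbols below are standard Cobb-Douglas notation assumed for illustration:

```latex
% Standard Cobb-Douglas: fixed output elasticities \alpha and \beta
Y_t = A_t K_t^{\alpha} L_t^{\beta}
% Time-varying extension: elasticities become functions of time
Y_t = A_t K_t^{\alpha(t)} L_t^{\beta(t)}
```

where $K_t$ is capital, $L_t$ is labor, $A_t$ is total factor productivity, and $\alpha(t)$, $\beta(t)$ are the time-varying output elasticities estimated by the semi-parametric varying-coefficient quantile model.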
A neural network based online anthropomorphic performance decision-making approach is described for a dual-arm dulcimer playing robot. Because it is difficult to manually extract experiential rules that describe the decision behavior of a human playing a dulcimer, the proposed method relies on the self-learning capability of an artificial neural network (ANN). The training data consist of three types of information: the pitches of adjacent notes, the time intervals between them in a piece of music, and the decision results from actual human performances. A decision-making approach, devised by combining the well-trained ANN with music for which performance decisions are required, is then applied. The numerical results show that, for several pieces of music with different characteristics, the accuracy and precision of the decision results are consistently high, which verifies the practicability and good generalizability of the method.
Chaotic systems have attracted much attention. When the OGY method is applied to control a chaotic system, chaos can be suppressed and target signals can be tracked with satisfactory accuracy. However, the traditional control method has a low convergence speed, which may hamper the performance of the whole system. To solve this problem, the cuckoo search algorithm is used to guide the orbits of chaotic systems. Moreover, the OGY method is improved so that a chaotic system can be stabilized at different target points. Finally, the effectiveness of the proposed method is verified on several typical chaotic systems. The simulation results indicate that the modified method converges faster and yields better performance than the traditional OGY control method.
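As a minimal illustration of the OGY idea (tiny parameter perturbations applied only near an unstable fixed point), here is a sketch for the logistic map; the parameter values and control window are assumptions for illustration, not the paper's settings:

```python
def ogy_logistic(r=3.9, x0=0.3, n=5000):
    # Stabilize the unstable fixed point x* = 1 - 1/r of the logistic map
    # x_{n+1} = r x (1 - x) via small perturbations dr of the parameter.
    x_star = 1.0 - 1.0 / r
    dr_max = 0.03                                  # max allowed perturbation
    # window chosen so the required dr never exceeds dr_max
    window = dr_max * x_star * (1 - x_star) / abs(2 - r)
    x, traj = x0, []
    for _ in range(n):
        dx = x - x_star
        dr = 0.0
        if abs(dx) < window:                       # control only near x*
            # cancel the linearized deviation; f'(x*) = 2 - r for this map
            dr = -(2 - r) * dx / (x_star * (1 - x_star))
        x = (r + dr) * x * (1 - x)
        traj.append(x)
    return traj, x_star
```

The chaotic orbit ergodically wanders until it enters the small control window; the linearizing perturbation then pins it to the fixed point, which is exactly the slow "wait for capture" phase that heuristics such as cuckoo search aim to shorten.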
Blockchain – a distributed and public database of transactions – has become a platform for decentralized applications. Despite its increasing popularity, blockchain technology faces a scalability problem: the throughput does not scale with the increasing network size. Thus, in this paper, we propose a scalable blockchain protocol to solve the scalability problem. The proposed method was designed based on a proof of stake (PoS) consensus protocol and a sharding protocol. Instead of transactions being processed by the whole network, the sharding protocol is employed to divide unconfirmed transactions into transaction shards and to divide the network into network shards. The network shards process the transaction shards in parallel to produce middle blocks. Middle blocks are then combined into a final block with a timestamp recorded on the blockchain. Experiments were performed in a simulation network consisting of 100 Amazon EC2 instances. The latency of the proposed method was approximately 27 s and the maximum throughput achieved was 36 transactions per second for a network containing 100 nodes. The results of the experiments indicate that the throughput of the proposed protocol increases with the network size. This confirms the scalability of the proposed protocol.
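The division step can be sketched as follows; the hash-based assignment and shard structure are a hypothetical illustration, not the paper's exact protocol (which would also involve PoS-based validator selection):

```python
import hashlib

def assign_shards(transactions, nodes, n_shards):
    # Hypothetical sketch of the division step: unconfirmed transactions are
    # hashed into transaction shards, and nodes are split into network shards;
    # network shard i then processes transaction shard i in parallel to
    # produce a middle block.
    tx_shards = [[] for _ in range(n_shards)]
    for tx in transactions:
        h = int(hashlib.sha256(tx.encode()).hexdigest(), 16)
        tx_shards[h % n_shards].append(tx)
    node_shards = [nodes[i::n_shards] for i in range(n_shards)]
    return list(zip(node_shards, tx_shards))
```

Because each shard validates only its own slice of the pending transactions, adding nodes adds shards, which is why throughput can grow with network size.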
The traditional collaborative filtering model suffers from high-dimensional, sparse user rating information and ignores the user preference information contained in user reviews. To address these problems, this paper proposes a new collaborative filtering model, UL_SAM (UBCF_LDA_SIMILAR_ADD_MEAN), which integrates a topic model with the user-based collaborative filtering model. UL_SAM extracts user preference information from user reviews through the topic model, fuses it with user rating information by a similarity fusion method, and generates collaborative filtering recommendations from the fused information. Its advantage in recommendation effectiveness comes from enriching the information available for collaborative recommendation by integrating user preferences with user ratings. Experimental results on two public datasets demonstrate that our model significantly improves recommendation effectiveness.
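The fusion step can be sketched as a weighted combination of rating-based and topic-based user similarities; the cosine measure and the weight `w` are assumptions for illustration, not the paper's exact formula:

```python
import numpy as np

def cosine(u, v):
    # cosine similarity with a small epsilon to avoid division by zero
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def fused_similarity(ratings_u, ratings_v, topics_u, topics_v, w=0.5):
    # Hypothetical fusion: blend similarity of sparse rating vectors with
    # similarity of LDA topic distributions derived from user reviews.
    return w * cosine(ratings_u, ratings_v) + (1 - w) * cosine(topics_u, topics_v)
```

Even when two users share few rated items, their review-derived topic distributions can still be compared, which is how the fused measure mitigates rating sparsity.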
From the perspective of preventive care, a monitoring function that detects a decline in cognitive function would be useful as an information and communications technology (ICT) based service for watching over elderly people. We developed a system that evaluates cognitive functioning by simultaneously measuring dual tasks using a tablet computer. The tasks comprised a spiral drawing task and a color change counting task. The objective of this research is feature extraction of mild cognitive impairment (MCI) using this system. To do so, we compared the results of dual task tests for three participant groups: elderly people with suspected MCI, healthy elderly people, and healthy young people. The analyses were based on the time required for drawing each section and the drawing velocity. The results indicate a significant difference between the suspected-MCI group and the other two groups in the time required to draw the section close to the center of the spiral, provided the difficulty of the test's sub-task is adjusted.
In human-machine interaction, facial emotion recognition plays an important role in recognizing the psychological state of humans. In this study, we propose a novel emotion recognition framework that uses a knowledge transfer approach to capture features and an improved deep forest model to determine the final emotion types. The structure of a very deep convolutional network is learned from ImageNet and is utilized to extract face and emotion features from other data sets, solving the problem of insufficiently labeled samples. Then, these features are input into a classifier called multi-composition deep forest, which consists of 16 types of forests for facial emotion recognition, to enhance the diversity of the framework. The proposed method does not require training a network with a complex structure, and the decision tree-based classifier can achieve accurate results with very few parameters, making it easier to implement, train, and apply in practice. Moreover, the classifier can adaptively decide its model complexity without iteratively updating parameters. The experimental results for two emotion recognition problems demonstrate the superiority of the proposed method over several well-known methods in facial emotion recognition.
To perform a complexity evaluation for an electromagnetic environment (EME), a new method based on the S-transform is proposed, which can simultaneously measure the time occupancy, frequency occupancy, and energy occupancy in the time–frequency domain. The frequency coincidence, modulation similarity, and background noise intensity are selected as important evaluation indices, and their physical interpretations are analyzed and calculated. The Extreme Learning Machine (ELM) method is adopted to evaluate the environmental complexity. The proposed method (S-ELM) requires less training time and has a fast convergence rate. The simulation and experimental results confirm that the proposed method is accurate and efficient.
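A minimal ELM sketch (random hidden layer, output weights solved in closed form with a pseudoinverse) illustrates why training is fast; the hidden-layer size and sigmoid activation are assumptions, not the paper's configuration:

```python
import numpy as np

def elm_train(X, y, n_hidden=32, seed=0):
    # Extreme Learning Machine: hidden weights are random and never trained;
    # only the linear output layer is solved, in one least-squares step.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y               # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

There is no iterative gradient descent anywhere, which is the source of the short training times the abstract refers to.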
In the area of network development, especially cloud computing, security has been a long-standing issue. In order to better utilize physical resources, cloud service providers usually allocate different tenants on the same physical machine, i.e., physical resources such as CPU, memory, and network devices are shared among multiple tenants on the same host. The virtual machine (VM) co-resident attack, a serious threat under this sharing model, involves malicious tenants who attempt to steal private data. Currently, most solutions focus on eliminating known specific side channels, but they have little effect on unknown side channels. Compared to eliminating side channels, developing a VM allocation strategy is an effective countermeasure against the VM co-resident attack, as it reduces the probability of VM co-residency, but research on this topic is still in its infancy. In this study, firstly, a novel, efficient, and secure VM allocation strategy named Against VM Co-resident attack based on Multi-objective Optimization Best Fit Decreasing (AC-MOBFD) is proposed, which simultaneously optimizes load balancing, energy consumption, and host resource utilization during VM placement. Subsequently, the security of the proposed allocation strategy is measured using two metrics – VM attack efficiency and VM attack coverage. Extensive experiments on simulated and real cloud platforms, CloudSim and OpenStack, respectively, demonstrate that using our strategy, the attack efficiency of VM co-residency is reduced by 37.3% and the VM coverage rate by 24.4% compared to existing strategies. Finally, we compare the number of co-resident hosts with that of hosts in a real cloud platform. Experimental results show that the deviation is below 9.4%, which validates the feasibility and effectiveness of the presented strategy.
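The best-fit-decreasing baseline that AC-MOBFD builds on can be sketched in its classic single-resource form (the multi-objective scoring over load, energy, and utilization is omitted here):

```python
def best_fit_decreasing(vm_demands, host_capacity):
    # Classic best-fit decreasing: place each VM, largest first, on the host
    # with the least remaining capacity that still fits it; open a new host
    # only when no existing host fits.
    remaining = []                       # remaining capacity per open host
    placement = []                       # host index chosen for each VM
    for d in sorted(vm_demands, reverse=True):
        fits = [i for i, c in enumerate(remaining) if c >= d]
        if fits:
            i = min(fits, key=lambda i: remaining[i])
            remaining[i] -= d
        else:
            remaining.append(host_capacity - d)
            i = len(remaining) - 1
        placement.append(i)
    return placement, remaining
```

Packing VMs tightly minimizes the number of active hosts; a security-aware variant additionally scores candidate hosts so that an attacker-controlled VM is less likely to land beside a victim.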
Unmanned aerial vehicles, more commonly known as drones, are aircraft that fly without a pilot onboard. For drones to fly through an area without GPS signals, scene understanding algorithms that assist autonomous navigation are useful. In this paper, various thresholding algorithms are evaluated to enhance scene understanding in addition to object detection. Based on the results obtained, global thresholding with a Gaussian filter segments regions of interest in the scene effectively at the lowest processing-time cost.
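A pipeline of this kind can be sketched under common definitions (separable Gaussian smoothing followed by Otsu's global threshold); the kernel size and implementation details are assumptions, not necessarily the paper's exact configuration:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian smoothing: 1-D convolution along rows, then columns.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    img = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 1, img)

def otsu_threshold(img):
    # Otsu's global threshold: pick t maximizing between-class variance.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    csum = np.cumsum(hist)
    cmean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = csum[t - 1] / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cmean[t - 1] / csum[t - 1]
        m1 = (cmean[-1] - cmean[t - 1]) / (total - csum[t - 1])
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Smoothing first suppresses sensor noise that would otherwise speckle the binary mask, while a single global threshold keeps the per-frame cost low, consistent with the processing-time result above.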
In this study, the use of a popular deep reinforcement learning algorithm – deep Q-learning – in developing end-to-end control policies for robotic swarms is explored. Robots have only limited local sensory capabilities; however, in a swarm, they can accomplish collective tasks beyond the capability of a single robot. Compared with most automatic design approaches proposed so far, which belong to the field of evolutionary robotics, deep reinforcement learning techniques provide two advantages: (i) they enable researchers to develop control policies in an end-to-end fashion; and (ii) they require fewer computational resources, especially when the control policy to be developed has a large parameter space. The proposed approach is evaluated in a round-trip task, where the robots are required to travel between two destinations as many times as possible. Simulation results show that the proposed approach can learn control policies directly from high-dimensional raw camera pixel inputs for robotic swarms.
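The learning target at the core of deep Q-learning can be illustrated with its tabular analogue; the network architecture, replay buffer, and swarm environment details are omitted, and the states/actions below are placeholders:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    # One-step Q-learning update: move Q(s, a) toward the bootstrapped
    # target r + gamma * max_a' Q(s', a'). Deep Q-learning replaces the
    # table Q with a neural network trained on the same target.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```

In the end-to-end setting, each robot feeds its raw camera pixels through the Q-network and acts greedily on the predicted values, so no hand-designed features or controllers are needed.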
Ant colony optimization (ACO) algorithms have been successfully applied to data classification problems that aim at discovering a list of classification rules. However, on the one hand, the ACO algorithm has defects including long search times and convergence to non-optimal solutions. On the other hand, given bottlenecks such as memory restrictions, time complexity, or data complexity, problems become very hard to solve as their scale grows. One solution for this issue is to design a highly parallelized learning algorithm. The MapReduce programming model has quickly emerged as the most common model for executing simple algorithmic tasks over huge volumes of data, since it is simple, highly abstract, and efficient. Therefore, MapReduce-based ACO has been researched extensively. However, due to MapReduce's unidirectional communication model and its inherent lack of support for iterative execution, ACO algorithms cannot easily be implemented on it. In this paper, a novel classification rule discovery algorithm is proposed, namely MR-AntMiner, which capitalizes on the benefits of the MapReduce model. In order to construct quality rules with fewer iterations and less communication between nodes to share the parameters used by each ant, our algorithm splits the training data into subsets that are randomly mapped to different mappers; the traditional ACO algorithm then runs on each mapper to obtain a local best rule set, and the global best rule list is produced in the reduce phase by a voting mechanism. The performance of our algorithm was studied experimentally on 14 publicly available data sets and compared with several state-of-the-art classification approaches in terms of accuracy. The experimental results show that the predictive accuracy obtained by our algorithm is statistically higher than that of the compared methods. Furthermore, the experiments demonstrate the feasibility and good performance of the parallelized MR-AntMiner algorithm.
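The map/reduce split described above can be sketched generically; `local_miner` stands in for the per-mapper ACO run and is a hypothetical placeholder, as is the representation of rules as hashable values:

```python
from collections import Counter

def map_phase(subsets, local_miner):
    # Each mapper runs the ACO rule miner independently on its data subset,
    # producing a local best rule set; no cross-mapper communication needed.
    return [local_miner(subset) for subset in subsets]

def reduce_phase(local_rule_sets):
    # Voting mechanism: rules discovered by more mappers rank higher in the
    # global best rule list.
    votes = Counter(rule for rules in local_rule_sets for rule in rules)
    return [rule for rule, _ in votes.most_common()]
```

Because each mapper is self-contained, the scheme avoids the iterative cross-node pheromone exchange that makes a faithful MapReduce port of ACO difficult.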
The heterogeneity of multimodal data is the main challenge in cross-media retrieval, and many methods have already been developed to address it. At present, subspace learning is one of the mainstream approaches for cross-media retrieval; its aim is to learn a latent shared subspace so that similarities between cross-modal data can be measured in that subspace. However, most existing subspace learning algorithms focus only on supervised information, using labeled data for training to obtain one pair of mapping matrices. In this paper, we propose joint graph regularization based semi-supervised cross-media retrieval (JGRHS), which makes full use of both labeled and unlabeled data. We jointly consider correlation analysis and semantic information when learning the projection matrices, to maintain the closeness of pairwise data and semantic consistency; graph regularization makes the learned transformations consistent with the similarity constraints in both modalities. In addition, retrieval results on three datasets indicate that the proposed method achieves good performance in both theoretical research and practical applications.
This paper presents an efficient method to build a corpus for training natural language understanding (NLU) modules. Conventional corpus creation methods involve a common cycle: a subject is given a specific situation in which the subject operates a device by voice, and then the subject speaks one utterance to execute the task. In these methods, many subjects are required to build a large-scale corpus, which increases lead time and financial cost. To solve this problem, we propose incorporating a "probing question" into the cycle. Specifically, after a subject speaks one utterance, the subject is asked to think of alternative utterances to execute the same task. In this way, we obtain many utterances from a small number of subjects. An evaluation of the proposed method applied to interview-based corpus creation shows that it reduces the number of subjects by 41% while maintaining the morphological diversity of the corpus and morphological coverage of user utterances spoken to commercial devices. It also shows that the proposed method reduces the total time for interviewing subjects by 36% compared with the conventional method. We conclude that the proposed method can be used to build a useful corpus while reducing lead time and financial cost.
We propose a parallel algorithm for mining non-redundant recurrent rules from a sequence database. Recurrent rules, proposed by Lo et al., can express that "whenever a series of precedent events occurs, a series of consequent events eventually occurs," and Lo et al. have shown the usefulness of recurrent rules in various domains, including software specification and verification. Although algorithms such as NR3 have been proposed, mining non-redundant recurrent rules still requires considerable processing time. To reduce the computation cost, we present a parallel approach to mining non-redundant recurrent rules that fully utilizes the task parallelism in NR3. We also give experimental results showing the effectiveness of our proposed method.