We introduce a real-time motion control system based on the EtherCAT protocol and apply it to a six-degree-of-freedom manipulator. The complexity of a multi-joint manipulator imposes high requirements on synchronization and real-time performance. EtherCAT technology greatly improves accuracy, speed, capability, and bandwidth in industrial control, which is crucial for our robot projects. In this paper, we discuss a servo motion control system based on EtherCAT that uses IgH as the open-source master station. A Linux operating system is adopted because of its open-source nature, high efficiency, high stability, and multi-platform support, which give developers more flexibility, freedom, and extensibility. We have conducted considerable work on EtherCAT technologies: implementing the code in-house with the aid of open-source libraries, debugging the master-slave communication process, and testing the resulting motion controller on Linux and POSIX-compatible operating systems. To improve the real-time response of servo control, a real-time Xenomai kernel was compiled, adopted, and tested, and it significantly enhanced the real-time performance of the servo motion control system. Furthermore, we explore trajectory planning and inverse kinematics. For the trajectory planning problem in Cartesian space, we propose a method based on cubic (degree-3) B-spline interpolation, which gives each segment of the planned curve relative independence while preserving continuity. A coordinate system is established using the modified D-H parameter method to obtain the inverse kinematics of the manipulator. Simulation and experimental results show that the inverse solutions are computed quickly and that the motion of the manipulator is continuous and smooth.
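To make the trajectory-planning step concrete, the following is a minimal sketch of cubic (degree-3) B-spline interpolation of Cartesian waypoints. It is illustrative only: the waypoints, the sampling density, and the use of SciPy's splprep/splev are assumptions, not the paper's implementation.

```python
# A minimal sketch of cubic (degree-3) B-spline trajectory interpolation
# in Cartesian space; waypoints and sampling density are illustrative.
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical Cartesian waypoints (x, y, z) for the end effector.
waypoints = np.array([
    [0.30, 0.00, 0.20],
    [0.35, 0.10, 0.25],
    [0.40, 0.05, 0.30],
    [0.45, -0.05, 0.28],
    [0.50, 0.00, 0.22],
])

# Fit a degree-3 B-spline through the waypoints (s=0 interpolates exactly).
tck, u = splprep(waypoints.T, k=3, s=0)

# Sample the smooth trajectory densely.
u_fine = np.linspace(0, 1, 200)
x, y, z = splev(u_fine, tck)
trajectory = np.stack([x, y, z], axis=1)
print(trajectory.shape)  # (200, 3)
```

Each segment of such a spline is locally controlled by a few neighboring waypoints yet remains C2-continuous across segment boundaries, which is the independence-plus-continuity property the abstract attributes to the method.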
This paper examines the influence of institutional investor shareholding on stock liquidity using samples from the NEEQ market from 2014 to 2016. The results suggest, first, that institutional investor shareholding reduces stock liquidity under agreement trading, and the higher the shareholding proportion, the worse the liquidity becomes. Second, institutional investor shareholding significantly increases stock liquidity under market-making trading. Third, owing to its quasi-market-making function, private equity shareholding has a more positive influence on liquidity than other kinds of institutional shareholding.
Based on panel data on the R&D activities of provincial high-tech industries in China from 1998 to 2014, this paper adopts spatial weight matrices of different dimensions, including geographical distance, technical distance, economic distance, proximity distance, and human capital distance, to construct a spatial econometric model that analyzes the knowledge spillover effects of R&D activities through both local and transnational routes. The results show that in the presence of spatial autocorrelation of the dependent variables, the estimates of the spatial panel model are more accurate and reliable than those of the conventional panel model. The spatial coefficients of the spatial econometric model are highly significant under all five spatial weight matrices, indicating a clear spatial correlation between the R&D activities of high-tech industries in different regions. Labor input and exports have a positive impact on innovation output, but the introduction of foreign technology hinders independent innovation in China's high-tech industry, and the impact of capital investment on innovation output is uncertain, as it depends closely on the model specification. In addition, the spatial knowledge spillover effect through the local route is larger than that through the transnational route.
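As an illustration of one ingredient of such models, the following is a minimal sketch of constructing a row-normalized inverse-distance spatial weight matrix. The coordinates are hypothetical; the paper's other matrices (technical, economic, proximity, human capital) would be built analogously from the corresponding distance measures.

```python
# A minimal sketch of an inverse geographical-distance spatial weight
# matrix W, row-normalized as is standard in spatial econometrics.
import numpy as np

coords = np.array([[116.4, 39.9],   # hypothetical provincial centroids
                   [121.5, 31.2],
                   [113.3, 23.1],
                   [104.1, 30.7]])

n = len(coords)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            d = np.linalg.norm(coords[i] - coords[j])
            W[i, j] = 1.0 / d          # closer provinces get larger weights

W = W / W.sum(axis=1, keepdims=True)   # row-normalize
print(W.round(3))
```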
Distribution prediction provides a complete description of forecasting uncertainty, which is of great significance to risk management. In this paper, a parametric method based on GARCH and a nonparametric method based on EWMA are both employed to model the conditional distributions of SHCI and SZCI returns in the Chinese stock market. From the perspective of quantile evaluation, the nonparametric method performs better. Furthermore, a simulated trading strategy based on time-varying quantiles is designed to analyze the trading yields at different levels of risk aversion. Over the whole sample, compared with the buy-and-hold strategy, SHCI yields higher profit at lower levels of risk aversion, while SZCI yields higher profit only within a very narrow range. In addition, the impact of the CSI 300 index futures (IF) is considered. In the subsample before the introduction of the index futures, only a few investors with high risk aversion could achieve higher earnings from SHCI, and there was hardly any opportunity for higher profit from SZCI. In the subsample after their introduction, however, many risk-loving and risk-neutral investors had the opportunity to earn more than under the buy-and-hold strategy, for both SHCI and SZCI. These conclusions imply that the CSI 300 index futures may enhance Chinese stock market activity and liquidity and create more opportunities for investors who are not risk averse.
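To illustrate the EWMA side of the comparison, the following is a minimal sketch of a time-varying conditional quantile built from an exponentially weighted variance. The decay factor, the simulated returns, and the Gaussian quantile mapping are illustrative assumptions, not the paper's exact nonparametric scheme.

```python
# A minimal sketch of an EWMA-based time-varying return quantile.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 500)      # stand-in for SHCI/SZCI returns

lam = 0.94                               # RiskMetrics-style decay factor
var = np.empty_like(returns)
var[0] = returns[:20].var()
for t in range(1, len(returns)):
    var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2

# Time-varying 5% quantile of the conditional distribution.
q05 = norm.ppf(0.05) * np.sqrt(var)
print(q05[-5:])
```

A trading rule of the kind the paper describes would then compare each day's forecast quantile against a risk-aversion-dependent threshold to decide whether to hold the index.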
Copula theory has been widely used in many scientific fields. In this study, we discuss model selection for Copula theory and derive a mixed Copula model based on the Kendall rank correlation coefficient. The new method is applied to forecasting the value at risk of a portfolio in the foreign exchange market. The results show that the mixed Copula model proposed in this paper outperforms other commonly used Copula models.
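As a concrete illustration of the Kendall-tau calibration step, the following sketch inverts the standard tau-to-parameter relations for two common copula families and combines them as a convex mixture. The data, the chosen families, and the fixed mixture weight are illustrative assumptions; in practice the weight would be fitted.

```python
# A minimal sketch of calibrating copula parameters from Kendall's tau
# and evaluating a two-family mixed copula.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
u = rng.uniform(size=300)
v = np.clip(u + rng.normal(0, 0.2, 300), 0.001, 0.999)  # dependent uniforms

tau, _ = kendalltau(u, v)

# Standard tau-to-parameter inversions for two common families.
theta_clayton = 2 * tau / (1 - tau)
theta_gumbel = 1 / (1 - tau)

def clayton_cdf(u, v, th):
    return np.maximum(u ** -th + v ** -th - 1, 0) ** (-1 / th)

def gumbel_cdf(u, v, th):
    return np.exp(-(((-np.log(u)) ** th + (-np.log(v)) ** th) ** (1 / th)))

# A mixed copula is a convex combination of family CDFs.
w = 0.5
C = w * clayton_cdf(0.4, 0.6, theta_clayton) \
    + (1 - w) * gumbel_cdf(0.4, 0.6, theta_gumbel)
print(round(tau, 3), round(C, 4))
```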
This study proposes a route prediction method using a self-organizing incremental neural network. The route trajectory is predicted from two location parameters (the latitude and longitude of the center of a tropical storm) and meteorological information (the atmospheric pressure). The method accurately predicted the normalized atmospheric pressure data of East Asia in the topological space of latitude and longitude, at low computational cost. This paper explains the algorithms for training the self-organizing incremental neural network, the procedure for refining the datasets, and the method for predicting the storm trajectory. The effectiveness of the proposed method was confirmed in experiments, and based on the results, possible improvements to the prediction model are discussed. This paper also explains the limitations of the proposed method and briefly outlines solutions to them. Although the proposed method was applied only to typhoon phenomena in the present study, it is potentially applicable to a wide range of global problems.
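The following is a highly simplified sketch of the self-organizing incremental idea: a node is inserted when an input is far from the existing topology, and otherwise the nearest node adapts toward the input. A real SOINN additionally maintains edges, edge ages, and node deletion; the threshold, learning rate, and synthetic samples below are assumptions.

```python
# An illustrative, heavily simplified incremental node-learning sketch.
import numpy as np

def incremental_fit(inputs, threshold=0.5, lr=0.1):
    nodes = [inputs[0].copy()]
    for x in inputs[1:]:
        d = [np.linalg.norm(x - n) for n in nodes]
        i = int(np.argmin(d))
        if d[i] > threshold:
            nodes.append(x.copy())           # novel region: insert a node
        else:
            nodes[i] += lr * (x - nodes[i])  # familiar region: adapt winner
    return np.array(nodes)

# Hypothetical (lat, lon, normalized pressure) samples along storm tracks.
rng = np.random.default_rng(2)
samples = rng.uniform(0, 5, size=(200, 3))
print(incremental_fit(samples).shape)
```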
With the rapid development of the Internet, it is becoming increasingly important to extract relationships between entities from massive web text and then to build knowledge graphs or knowledge bases. In this paper, we focus on pattern representation in relation extraction and extract high-accuracy Chinese entity pairs from large-scale web texts. Previous relation patterns consider only shallow lexical and syntactic features, fail to express pattern context information accurately and deeply, and ignore keyword information. Building on recent entity relation extraction techniques and the characteristics of Chinese corpora, we define a pattern representation based on keywords and word embedding information, extract deep semantic features of the context, and strengthen the effect of keyword information on relation extraction. In addition, we propose a method for obtaining sentence keywords based on word embeddings. In the experiments, we use the Chinese Hudong Encyclopedia corpus to implement a character relation extraction system and test its effectiveness. The experimental results show that this method effectively improves the quality of the patterns and achieves favorable relation extraction performance.
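One plausible form of an embedding-based sentence-keyword score is centrality to the sentence: rank each word by its cosine similarity to the centroid of the sentence's word vectors. This sketch uses toy random vectors in place of pretrained Chinese embeddings, and the scoring rule is an assumption, not necessarily the paper's method.

```python
# A minimal sketch of embedding-based keyword scoring by centroid similarity.
import numpy as np

def keywords(words, emb, top_k=2):
    vecs = np.array([emb[w] for w in words])
    centroid = vecs.mean(axis=0)
    sims = vecs @ centroid / (
        np.linalg.norm(vecs, axis=1) * np.linalg.norm(centroid))
    order = np.argsort(-sims)                # most central words first
    return [words[i] for i in order[:top_k]]

# Toy vectors standing in for pretrained word embeddings.
rng = np.random.default_rng(3)
sentence = ["entity1", "born", "in", "entity2"]
emb = {w: v for w, v in zip(sentence, rng.normal(size=(4, 50)))}
print(keywords(sentence, emb))
```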
At present, most dynamic sign language recognition targets only isolated sign words; research on continuous sign language sentence recognition is scarce, because segmenting such sentences is very difficult. In this paper, a sign language sentence recognition algorithm based on weighted key-frames is proposed. Key-frames can be regarded as the basic units of sign words, so from the key-frames we can recover the related vocabulary and then organize these words into meaningful sentences, avoiding the hard problem of segmenting sign language sentences directly. With the help of the Kinect motion-sensing device, a self-adaptive key-frame extraction algorithm based on the trajectory of the signing motion is presented. Each key-frame is then weighted according to its semantic contribution. Finally, a recognition algorithm is designed on top of these weighted key-frames to recognize continuous sign language sentences. Experiments show that the proposed algorithm achieves real-time recognition of continuous sign language sentences.
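A simple way to make the trajectory-based key-frame idea concrete: treat frames where the hand speed drops below an adaptive threshold (a fraction of the mean speed, so it adapts to the signer) as key-frames. The threshold rule and the synthetic trajectory below are assumptions; the paper's algorithm and the Kinect skeleton stream are more involved.

```python
# A minimal sketch of self-adaptive key-frame selection from hand speed.
import numpy as np

def key_frames(positions, ratio=0.5):
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    threshold = ratio * speeds.mean()      # adapts to the signer's pace
    return np.where(speeds < threshold)[0] + 1

rng = np.random.default_rng(4)
traj = np.cumsum(rng.normal(0, 1, size=(100, 3)), axis=0)  # toy hand path
print(key_frames(traj))
```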
Illumination estimation is an important research topic in mixed reality. This paper presents a novel method for locating multiple point light sources and estimating their intensities from images of a pair of reference spheres. In our approach, no prior knowledge of the sphere locations is necessary, and the center of each sphere can be uniquely identified given its known radius. The sphere surface is assumed to have both Lambertian and specular properties, rather than being purely Lambertian or purely specular, which yields higher accuracy than existing approaches. The position estimation of multiple light sources is based on the fact that highlight locations depend strongly on the specular reflection geometry: one sphere is used to determine the directions of the light sources, and two spheres are used to locate their positions. The images of the reference spheres are sampled and partitioned for multiple light sources in different positions, and an illumination model is used to calculate the intensities of the ambient light and the individual light sources. Experiments on both simulated and synthetic images show that the method is feasible and accurate for estimating the positions and intensities of multiple light sources.
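The direction-recovery step can be sketched from the mirror-reflection law L = 2(N·V)N - V, where N is the surface normal at the highlight and V the direction toward the viewer. The highlight point, sphere geometry, and orthographic camera below are illustrative assumptions.

```python
# A minimal sketch of recovering a light direction from a sphere highlight.
import numpy as np

center = np.array([0.0, 0.0, 5.0])    # known sphere center
radius = 1.0
view = np.array([0.0, 0.0, -1.0])     # orthographic viewing direction

# Hypothetical highlight point on the sphere surface.
highlight = center + radius * np.array([0.3, 0.4, -np.sqrt(1 - 0.25)])

normal = (highlight - center) / radius
to_viewer = -view                      # unit vector toward the camera
light_dir = 2 * np.dot(normal, to_viewer) * normal - to_viewer
print(light_dir / np.linalg.norm(light_dir))
```

With a second sphere, the two recovered reflection rays can be intersected in a least-squares sense to obtain the source position, matching the one-sphere-for-direction, two-spheres-for-position scheme.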
An active fault-tolerant control scheme for a quadrotor unmanned aerial vehicle (UAV) with actuator faults is presented in this paper. The proposed scheme is based on model predictive control (MPC) and a discrete-time sliding mode observer. Considering the impact of disturbances on fault diagnosis, a discrete-time sliding mode observer with a simple structure and strong robustness against disturbances is designed to isolate actuator faults and accurately estimate the control effectiveness factors. Using the fault diagnosis information, a model predictive active fault-tolerant controller with an embedded integrator is proposed to compensate for parameter uncertainty and bounded disturbances in a realistic quadrotor control system. The advantages of the proposed scheme are its ability to handle control constraints, its improved fault-tolerant control precision, and its better real-time and disturbance-rejection performance. Comparative experiments on a quadrotor semi-physical simulation platform validate the feasibility and effectiveness of the proposed control scheme.
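The observer idea can be sketched on a toy scalar system: a discrete-time observer with a linear correction plus a switching (sign) injection term, driving the estimation error to a sliding surface despite a loss-of-effectiveness fault. The plant, gains, and fault profile below are illustrative, not the paper's quadrotor design.

```python
# A minimal sketch of a discrete-time sliding mode observer on a toy system.
import numpy as np

a, b = 0.95, 0.1
L, K = 0.4, 0.05                      # linear and switching observer gains

x_true, x_hat = 0.0, 0.5
for k in range(200):
    u = 1.0
    gamma = 0.7 if k > 100 else 1.0   # control effectiveness factor (fault)
    x_true = a * x_true + b * gamma * u   # faulty plant
    y = x_true
    # Observer: one-step prediction plus linear and switching injection.
    e = y - (a * x_hat + b * u)
    x_hat = a * x_hat + b * u + L * e + K * np.sign(e)
print(round(y - x_hat, 4))
```

In a full design, the low-pass-filtered switching term (the equivalent output injection) is what would be used to estimate the effectiveness factor gamma for the fault-tolerant MPC layer.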
This paper examines the potential of personal values-based user modeling for long tail item recommendation. Long tail items are defined as those that are not popular but are preferred by small numbers of specific users. Although recommending long tail items to relevant users benefits both the providers and the consumers of such items, it is known to be a challenge for most recommendation algorithms. In particular, a long tail item is purchased and/or rated by only a small number of users, so it is difficult to predict its rating accurately. This paper assumes that the influence of personal values becomes more pronounced when users evaluate long tail items, and examines this assumption through offline experiments. The Rating Matching Rate (RMRate) has been proposed to incorporate users' personal values into recommender systems. Because the RMRate models personal values as weights on item attributes, it is easy to incorporate into existing recommendation algorithms. An experiment evaluating long tail item recommendation shows that personal values-based user modeling can recommend less popular items while maintaining precision.
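As a hedged reading of the spirit of the measure (not its exact published definition), one can compute, for each item attribute, the fraction of a user's rated items carrying that attribute whose rating polarity matches the user's overall polarity, and use that rate as the attribute weight.

```python
# An illustrative sketch of an attribute matching rate; the ratings,
# attributes, and polarity rule are all hypothetical.
import numpy as np

ratings = {"i1": 5, "i2": 2, "i3": 4, "i4": 1}            # user ratings
attrs = {"i1": {"indie"}, "i2": {"blockbuster"},
         "i3": {"indie"}, "i4": {"blockbuster"}}

overall_positive = np.mean(list(ratings.values())) >= 3

def matching_rate(attribute):
    items = [i for i in ratings if attribute in attrs[i]]
    matches = [(ratings[i] >= 3) == overall_positive for i in items]
    return sum(matches) / len(items)

print(matching_rate("indie"), matching_rate("blockbuster"))
```

Because such weights attach to item attributes rather than to a separate model, they can be folded into existing similarity or rating-prediction formulas, which is the ease-of-integration point the abstract makes.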
This paper addresses the problem of cross-season visual place classification (VPC) from the novel perspective of long-term map learning. Our goal is to enable efficient transfer learning from one season to the next, at a small constant cost, without wasting the robot's available long-term memory by memorizing very large amounts of training data. To achieve a good tradeoff between generalization and specialization abilities, we employ an ensemble of deep convolutional neural network (DCN) classifiers and consider the task of scheduling (when and which classifiers to retrain), given the previous season's DCN classifiers as the sole prior knowledge. We present a unified framework for retraining scheduling and discuss practical implementation strategies. Furthermore, we address the task of partitioning a robot's workspace into places to define place classes in an unsupervised manner, as opposed to using uniform partitioning, so as to maximize VPC performance. Experiments using the publicly available NCLT dataset revealed that retraining scheduling of a DCN classifier ensemble is crucial to achieving a good balance between generalization and specialization, and that performance improves significantly under planned scheduling.
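The scheduling question can be illustrated with a deliberately simple policy: under a fixed per-season retraining budget, retrain the ensemble members whose accuracy degraded most on the new season's data. The budget, the accuracy probe, and the greedy rule are assumptions for illustration, not the paper's framework.

```python
# A minimal sketch of budgeted retraining scheduling for an ensemble.
def schedule_retraining(prev_acc, new_acc, budget):
    """Return indices of classifiers to retrain this season."""
    degradation = [(p - n, i)
                   for i, (p, n) in enumerate(zip(prev_acc, new_acc))]
    degradation.sort(reverse=True)         # worst degradation first
    return [i for drop, i in degradation[:budget] if drop > 0]

prev_acc = [0.91, 0.88, 0.93, 0.85]        # last season's validation accuracy
new_acc = [0.74, 0.86, 0.69, 0.83]         # same classifiers on the new season
print(schedule_retraining(prev_acc, new_acc, budget=2))  # -> [2, 0]
```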
Various applications of data analysis and their effects have been reported recently. With the remarkable progress in classification methods, one example being support vector machines, clustering, as the main method of unsupervised classification, has also been studied extensively. Consequently, fuzzy methods of clustering are becoming a standard technique. However, unsolved theoretical and methodological problems in fuzzy clustering remain and have to be studied more deeply. This issue collects five papers concerned with fuzzy clustering and related fields, in all of which the main interest is methodology. Kondo and Kanzawa consider fuzzy clustering with a new objective function using q-divergence, a generalization of the well-known Kullback-Leibler divergence. Among different data types, they focus on categorical data. They also show the relations between different methods of fuzzy c-means. This study thus further generalizes methods of fuzzy clustering, probing the methodological boundaries of what fuzzy clustering models can do. Kitajima, Endo, and Hamasuna propose a method of controlling cluster sizes so that the resulting clusters have even sizes, which differs from the optimization of cluster sizes dealt with in other studies. This technique broadens the application fields of clustering to those in which cluster sizes are more important than cluster shapes. Hamasuna et al. study cluster validity measures for network data. Such measures are generally proposed for points in Euclidean spaces, but the authors adapt several validity measures to network data and examine their effectiveness using simple network examples. Ubukata et al. propose a new c-means method related to rough sets, based on a different idea from the well-known rough c-means of Lingras. Finally, Kusunoki, Wakou, and Tatsumi study a maximum-margin model for the nearest prototype classifier that leads to the optimization of a difference of convex functions. All papers include methodologically important ideas that should be further investigated and applied to real-world problems.
This paper presents two fuzzy clustering algorithms for categorical multivariate data based on q-divergence. First, this study shows that a conventional method for vectorial data can be explained as regularizing another conventional method using q-divergence. Second, based on the facts that Kullback-Leibler (KL) divergence is generalized by the q-divergence and that two conventional fuzzy clustering methods for categorical multivariate data adopt KL-divergence, two q-divergence-based fuzzy clustering algorithms for categorical multivariate data are derived from optimization problems built by extending the KL-divergence in these conventional methods to the q-divergence. Numerical experiments on real datasets show that the proposed methods outperform the conventional methods in terms of clustering accuracy.
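For reference, one common form of the q-divergence (the Tsallis relative entropy) and its KL limit is given below; the exact normalization used in the paper may differ by constants.

```latex
% q-divergence between discrete distributions p and r, and its q -> 1 limit.
\[
D_q(p \,\|\, r) \;=\; \frac{1}{q-1}\left(\sum_i p_i^{\,q}\, r_i^{\,1-q} - 1\right),
\qquad
\lim_{q \to 1} D_q(p \,\|\, r) \;=\; \sum_i p_i \log \frac{p_i}{r_i},
\]
% so the Kullback-Leibler divergence is recovered in the limit q -> 1.
```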
Clustering is a method of data analysis that does not use supervised data. Even-sized clustering based on optimization (ECBO) is a clustering algorithm that focuses on cluster size, under the constraint that all cluster sizes must be equal. However, this constraint makes ECBO inconvenient to apply in cases where a certain margin in cluster size is allowed. We believe this issue can be overcome by applying fuzzy clustering, which represents the membership of data points to clusters more flexibly. In this paper, we propose a new even-sized clustering algorithm based on fuzzy clustering and verify its effectiveness through numerical examples.
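The following sketch conveys the even-size-with-margin idea only: fuzzy c-means memberships are computed, then points are greedily assigned in order of membership while each cluster respects a capacity of ceil(n/c) plus a margin. The paper's optimization-based formulation differs in detail; everything here is an illustrative assumption.

```python
# A minimal sketch of fuzzy memberships with a soft cluster-size margin.
import numpy as np
from math import ceil

def fcm_memberships(X, centers, m=2.0):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
    inv = d ** (-2 / (m - 1))
    return inv / inv.sum(axis=1, keepdims=True)

def even_sized_assign(X, centers, margin=1):
    u = fcm_memberships(X, centers)
    n, c = u.shape
    cap = ceil(n / c) + margin                 # allowed size per cluster
    labels = -np.ones(n, dtype=int)
    counts = np.zeros(c, dtype=int)
    # Visit (point, cluster) pairs from highest membership downward.
    for idx in np.argsort(-u, axis=None):
        i, j = divmod(int(idx), c)
        if labels[i] < 0 and counts[j] < cap:
            labels[i], counts[j] = j, counts[j] + 1
    return labels

rng = np.random.default_rng(5)
X = rng.normal(size=(20, 2))
print(np.bincount(even_sized_assign(X, X[:3])))  # near-even cluster sizes
```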
Modularity is an evaluation measure for network partitions and is used as the merging criterion in the Louvain method. To construct useful cluster validity measures and clustering methods for network data, we propose network cluster validity measures based on traditional indices. The effectiveness of the proposed measures is compared, and the measures are applied to determine the optimal number of clusters. Network cluster partitions of various network data generated from the Polaris dataset are obtained by k-medoids with Dijkstra's algorithm and evaluated by the proposed measures as well as by modularity. Our numerical experiments show that the Dunn index- and Xie-Beni index-based measures are more effective for network partitions than the other indices.
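For reference, the standard Newman-Girvan modularity used as the baseline measure is:

```latex
% Modularity Q for adjacency matrix A, node degrees k_i, m edges,
% and cluster labels c_i.
\[
Q \;=\; \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right)
\delta(c_i, c_j),
\]
% where delta(c_i, c_j) is 1 when nodes i and j share a cluster, else 0.
```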
Hard C-means (HCM), one of the most popular clustering techniques, has been extended using soft computing approaches such as fuzzy theory and rough set theory. Fuzzy C-means (FCM) and rough C-means (RCM) are the fuzzy and rough set extensions of HCM, respectively. RCM detects the positive and possible regions of clusters by using lower and upper areas that are analogous to the lower and upper approximations in rough set theory. However, RCM-type methods have the problem that the original definitions of the lower and upper approximations are not actually used. In this paper, rough set C-means (RSCM), an extension of HCM based on the original rough set definitions, is proposed as a rough set-based counterpart of RCM. Specifically, RSCM is formulated as a clustering model on an approximation space, considering a space granulated by a binary relation, and it uses the lower and upper approximations of temporal clusters. We investigated the characteristics of the proposed RSCM through basic considerations, visual demonstrations, and comparative experiments. We observed the geometric characteristics of the examined methods using visualizations, and conducted numerical experiments on the problem of classifying patients as having benign or malignant disease based on a medical dataset. We compared classification performance in terms of the trade-off between the classification accuracy within the positive region and the fraction of objects classified into the positive region.
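For reference, the original rough-set approximations that RSCM builds on are:

```latex
% Lower and upper approximations of a set X under a binary relation R,
% where R(x) denotes the granule of objects related to x.
\[
\underline{R}(X) \;=\; \{\, x \mid R(x) \subseteq X \,\}, \qquad
\overline{R}(X) \;=\; \{\, x \mid R(x) \cap X \neq \emptyset \,\},
\]
% i.e., the lower approximation collects objects whose granules lie
% entirely inside X, and the upper approximation those whose granules
% intersect X at all.
```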
In this paper, we study nearest prototype classifiers, which assign each data instance to the class of its nearest prototype. We propose a maximum-margin model for nearest prototype classifiers. To define the margin, we first introduce a class-wise discriminant function, given for each instance by the negative of the distance to its nearest prototype of that class. The margin is then the minimum, over the training instances, of the difference between the discriminant value for an instance's own class and the largest discriminant value among the other classes. The optimization problem corresponding to this maximum-margin model is a difference of convex functions (DC) program. It is solved using a DC algorithm, which is a k-means-like algorithm in which the assignments and positions of the prototypes are alternately optimized. Through a numerical study, we analyze the effects of the hyperparameters of the maximum-margin model, focusing on classification performance.
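In symbols, the discriminant and margin described above can be written as:

```latex
% Class-wise discriminant g_c and margin rho for an instance x with
% label y, where P_c is the prototype set of class c.
\[
g_c(x) \;=\; -\min_{p \in P_c} \lVert x - p \rVert, \qquad
\rho(x, y) \;=\; g_y(x) - \max_{c \neq y} g_c(x),
\]
% the model maximizes the minimum of rho over the training instances,
% which yields a difference-of-convex-functions (DC) program.
```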