2019 Volume 10 Issue 1 Pages 28-44
By virtue of recent developments in machine learning techniques, higher-level information can now be extracted from massive data. In this paper, we focus on extracting multiple semantic relations using lightweight processing based on an efficient low-dimensional representation of substrings in text data. We propose an approach that builds features for relation classification consisting only of low-dimensional vectors, called substring vectors [1], that represent the substrings between words. In addition, we investigate the relationship between the number of dimensions and the accuracy obtained when nonlinear classifiers are applied. The experimental results show that, with simple features and a small computational cost, our approach using relatively low-dimensional representations achieves a sufficiently high accuracy, outperforming most existing approaches.
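To make the idea concrete, the following is a minimal sketch of how a substring between two target words might be encoded as a low-dimensional feature vector. The character-trigram hashing used here is purely illustrative (the function name, the trigram choice, and the hashing scheme are assumptions, not the paper's actual substring-vector construction):

```python
import hashlib

def substring_vector(sentence, w1, w2, dim=16):
    """Illustrative encoding (an assumption, not the paper's method):
    hash character trigrams of the substring between w1 and w2 into a
    dim-dimensional count vector."""
    i = sentence.index(w1) + len(w1)      # end of the first word
    j = sentence.index(w2, i)             # start of the second word
    sub = sentence[i:j].strip()           # substring between the words
    vec = [0.0] * dim
    for k in range(len(sub) - 2):
        tri = sub[k:k + 3]
        # Hash each trigram to one of the dim buckets and count it.
        h = int(hashlib.md5(tri.encode()).hexdigest(), 16) % dim
        vec[h] += 1.0
    return vec

# Example: encode the context between two entity mentions.
v = substring_vector("Paris is the capital of France", "Paris", "France")
```

Such fixed-length vectors could then be fed directly to a nonlinear classifier, which is the setting the dimension-versus-accuracy experiments above examine.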