Although sign language is one of the most common means of communication used by hearing-impaired people, a high barrier to communication remains between hearing-impaired people and hearing people, because many hearing people do not understand sign language. Automatic translation of sign language would therefore help facilitate communication between them. The authors are working toward realizing sign language translation using an optical camera and the CPU embedded in a smartphone as a final goal. In this paper, the authors examine a sign language recognition method using a colored glove and an optical camera, and extract six kinds of feature elements for classification from the position of the center of gravity and the area of each colored region of the glove. These feature elements are applied to a Hidden Markov Model (HMM), Support Vector Machine (SVM), Discriminant Analysis (DA), Linear Classification Model (LCM), k-Nearest Neighbor algorithm (k-NN), and Decision Tree (DT) to classify each sign language motion. We evaluate the performance of each classifier and propose a method that combines their classification results with the aim of realizing highly accurate recognition. For 35 sign language words, the proposed method achieves a recognition rate of 73.1% with a single classifier and 76.8% with the combination method.
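As a rough illustration of the kind of per-frame measurement the abstract describes, the following is a minimal sketch (not the authors' implementation) that extracts the center of gravity and area of one colored glove region with OpenCV. The HSV thresholds and the red-fingertip example are placeholder assumptions; the paper's six feature elements are derived from such centroid and area values over the course of a sign motion.

```python
import cv2
import numpy as np

def region_features(frame_bgr, hsv_lower, hsv_upper):
    """Return (cx, cy, area) of the pixels matching the given HSV range.

    Assumption: one glove color maps to one HSV interval; the actual glove
    coloring and thresholds used by the authors are not given in the abstract.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower), np.array(hsv_upper))
    moments = cv2.moments(mask, binaryImage=True)
    area = moments["m00"]
    if area == 0:
        return None  # colored region not visible in this frame
    cx = moments["m10"] / area  # x coordinate of the center of gravity
    cy = moments["m01"] / area  # y coordinate of the center of gravity
    return cx, cy, area

# Example with a hypothetical red-colored region:
# features = region_features(frame, (0, 120, 70), (10, 255, 255))
```

Sequences of such (cx, cy, area) values per colored region could then be fed to the classifiers listed above, with the combination step aggregating their individual decisions.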