Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: May 27, 2020 - May 30, 2020
We developed a gesture classification algorithm that can be deployed on a small edge device. The algorithm consists of neuro-inspired spatial and spatio-temporal feature extractors and an echo state network (ESN). The spatial feature extractor is based on the HMAX algorithm, which extracts scale-invariant features of line segments, and the spatio-temporal feature extractor was developed by combining the HMAX algorithm with a motion detection model of the visual cortex. The ESN, which can classify spatio-temporal data despite its simple three-layer architecture, was adopted in place of a deep neural network, which is not well suited to small devices. We evaluated the proposed algorithm by classifying ten kinds of sign-language gestures, and the results showed that the algorithm classified them successfully.
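To illustrate the classifier stage, the following is a minimal sketch of an echo state network readout operating on precomputed spatio-temporal feature sequences (standing in for the HMAX-based front end described above). All names, dimensions, and hyperparameters (reservoir size, spectral radius, leak rate, ridge penalty) are illustrative assumptions, not values from the original work; only the linear readout is trained, which is what keeps the ESN lightweight compared with a deep network.

```python
# Sketch of an ESN classifier: fixed random reservoir + trained linear readout.
# Feature sequences are assumed to come from an HMAX-style extractor (not shown).
import numpy as np

class ESNClassifier:
    def __init__(self, n_inputs, n_reservoir=300, n_classes=10,
                 spectral_radius=0.9, leak_rate=0.3, ridge=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale recurrent weights so the spectral radius is < 1,
        # keeping the reservoir in the echo-state regime.
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.leak = leak_rate
        self.ridge = ridge
        self.n_classes = n_classes
        self.W_out = None

    def _final_state(self, sequence):
        # Run one feature sequence (T x n_inputs) through the leaky reservoir
        # and return the last state as a fixed-size summary of the gesture.
        x = np.zeros(self.W.shape[0])
        for u in sequence:
            x = (1 - self.leak) * x + self.leak * np.tanh(self.W_in @ u + self.W @ x)
        return x

    def fit(self, sequences, labels):
        # Only the readout is trained (ridge regression); input and
        # recurrent weights stay fixed, so training is cheap.
        X = np.stack([self._final_state(s) for s in sequences])
        Y = np.eye(self.n_classes)[labels]  # one-hot targets
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ Y)
        return self

    def predict(self, sequences):
        X = np.stack([self._final_state(s) for s in sequences])
        return np.argmax(X @ self.W_out, axis=1)


if __name__ == "__main__":
    # Toy usage with random data standing in for HMAX feature sequences
    # of ten gesture classes.
    rng = np.random.default_rng(1)
    train = [rng.normal(size=(40, 64)) for _ in range(50)]
    y = rng.integers(0, 10, size=50)
    clf = ESNClassifier(n_inputs=64).fit(train, y)
    print(clf.predict(train[:5]))
```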