In recent years, various user interfaces have been developed to meet the diverse needs of physically disabled persons. In this paper, we propose a method to identify gazing and facial movements based on gaze duration and eye/face information, and we develop an electric wheelchair that can be operated with the user's "natural" gazing and facial movements. The intention estimation model is composed of 1DCNN and LSTM layers. First, the 1DCNN extracts features from the gaze and face information; the extracted features are then input to the LSTM to estimate the user's intentions regarding movement. Evaluation experiments suggest that combining gaze and face information improves the estimation accuracy and contributes to the classification performance. Furthermore, it was confirmed that adding a convolutional filter layer before the LSTM layer improved the accuracy.
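The paper does not include code; as an illustration of the described 1DCNN-then-LSTM architecture, below is a minimal PyTorch sketch. All shapes and hyperparameters (number of gaze/face features per frame, window length, channel counts, and the set of movement-intention classes) are assumptions made for illustration, not the authors' settings.

```python
# Minimal sketch (assumed shapes/hyperparameters, not the authors' settings):
# a 1D convolution extracts local features from a window of gaze/face
# measurements, and an LSTM models their temporal dependence before a
# final linear layer classifies the movement intention.
import torch
import torch.nn as nn

class IntentionEstimator(nn.Module):
    def __init__(self, in_features=12, conv_channels=32,
                 lstm_hidden=64, num_classes=5):
        super().__init__()
        # 1DCNN over the time axis; in_features = gaze + face values per frame
        self.conv = nn.Sequential(
            nn.Conv1d(in_features, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM consumes the convolutional feature sequence
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, features) -> Conv1d expects (batch, features, time)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h_n, _) = self.lstm(h)   # last hidden state summarizes the window
        return self.head(h_n[-1])    # logits over intention classes

# Example: a batch of 8 windows, 30 frames each, 12 features per frame
model = IntentionEstimator()
logits = model(torch.randn(8, 30, 12))
print(logits.shape)  # torch.Size([8, 5])
```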