Journal of Biomechanical Science and Engineering
Online ISSN : 1880-9863
ISSN-L : 1880-9863
Learning and visualization of features using MC-DCNN for gait training considering physical individual differences
Yusuke OSAWA, Keiichi WATANUKI, Kazunori KAEDE, Keiichi MURAMATSU
Open access, advance online publication

Article ID: 20-00337

A final published version of this article is available.
Abstract

Several training methods have been developed that measure motion information during real-time walking and feed it back to the trainee. The trainee adjusts his or her gait so that the measured value approaches a target value, which may not be suitable for every trainee. We therefore aim to develop a gait feedback training system that accounts for individual differences by classifying the trainee’s gait and identifying which body parts should be adjusted, and when. A convolutional neural network (CNN) has a feature extraction function and is robust to shifts in feature position; it can therefore be used to classify a gait as ideal or non-ideal. Moreover, when gradient-weighted class activation mapping (Grad-CAM) is applied to the gait classification model, its output indicates the degree to which each of the trainee’s body parts contributed to the classification result, so the trainee can visually identify the body parts that need adjustment. In this study, we focused on gaits related to stumbling. We measured kinematic and kinetic data from the participants and generated multivariate gait data, which were labeled as belonging to a “gait rarely associated with stumbling” class or a “gait frequently associated with stumbling” class using clustering with dynamic time warping. Next, a multichannel deep CNN (MC-DCNN) was trained on the multivariate gait data and the corresponding class labels. Finally, verification data were input into the MC-DCNN model, and the degree of influence of each region of the multivariate gait data on the classification was visualized using Grad-CAM. The MC-DCNN model classified gaits with a high accuracy of 97.64 ± 0.40% and learned features that determine the thumb-to-ground distance. The Grad-CAM output indicated the body parts, timing, and relative strength of the features that most strongly affect the thumb-to-ground distance.
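The labeling step described above clusters gait time series using dynamic time warping (DTW). As a rough illustration of the distance measure involved, the following is a minimal, self-contained sketch of the classic DTW dynamic program for univariate sequences; the function name and toy sequences are our own illustrative assumptions, not taken from the paper, and the authors' actual clustering pipeline is not reproduced here.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two univariate sequences.

    A sketch of the standard DP formulation: dp[i][j] holds the minimal
    cumulative cost of aligning a[:i] with b[:j], allowing stretches and
    compressions along the time axis.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # a[i-1] matched again
                                  dp[i][j - 1],      # b[j-1] matched again
                                  dp[i - 1][j - 1])  # one-to-one match
    return dp[n][m]
```

Because DTW tolerates local time shifts, two gait cycles with the same shape but slightly different timing receive a small distance; e.g. `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0.0, whereas a plain pointwise (Euclidean-style) comparison would not even be defined for sequences of different lengths.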

© 2020 by The Japan Society of Mechanical Engineers