Abstract
In this paper, we propose a method of human action recognition for videos in which actions transition continuously. First, we build a pose estimator that learns joint coordinates using Convolutional Neural Networks (CNN) and extract pose features from its intermediate layers. Second, we train an action recognizer based on Long Short-Term Memory (LSTM), taking the pose features and environmental features as inputs; for this training we propose Pose-Centric Learning. In addition, from the pose features we compute an Attention map that represents the element-wise importance of the environmental features, and filter the latter with this Attention to make them more effective. When modeling the action recognizer, we structure it as a hierarchical LSTM. In experiments, we evaluated our method against a conventional method and achieved a 15.7% improvement on a challenging action recognition dataset.
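As a rough illustration of the pipeline summarized above (pose-driven attention over environmental features, followed by a hierarchical LSTM), the following PyTorch-style sketch shows one possible reading of the architecture. It is not the authors' implementation: the module names, feature dimensions, sigmoid gating, and two-level LSTM arrangement are all assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's code): pose-driven attention gating of
# environmental features followed by a two-level (hierarchical) LSTM.
# Layer names, dimensions, and the sigmoid gate are illustrative assumptions.
import torch
import torch.nn as nn

class PoseAttentionActionRecognizer(nn.Module):
    def __init__(self, pose_dim=256, env_dim=512, hidden_dim=256, num_classes=21):
        super().__init__()
        # Attention weights for each element of the environmental feature,
        # computed from the pose feature (assumed interpretation of the abstract).
        self.attn = nn.Sequential(nn.Linear(pose_dim, env_dim), nn.Sigmoid())
        # Hierarchical LSTM: a lower LSTM over fused per-frame features,
        # and an upper LSTM over the lower LSTM's outputs.
        self.lstm_low = nn.LSTM(pose_dim + env_dim, hidden_dim, batch_first=True)
        self.lstm_high = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, pose_feat, env_feat):
        # pose_feat: (batch, time, pose_dim); env_feat: (batch, time, env_dim)
        attn = self.attn(pose_feat)            # element-wise importance in [0, 1]
        env_filtered = env_feat * attn         # filter environmental features
        fused = torch.cat([pose_feat, env_filtered], dim=-1)
        low_out, _ = self.lstm_low(fused)      # lower level of the hierarchy
        high_out, _ = self.lstm_high(low_out)  # upper level of the hierarchy
        return self.classifier(high_out[:, -1])  # class scores for the clip
```

Under these assumptions, the pose feature both contributes directly to the fused input and decides, via the sigmoid attention, how much of each environmental feature dimension is passed on to the recurrent layers.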