2024, Volume 32, Pages 641-651
The classification of encoded frame-level descriptors is a common approach to video event recognition. However, the structural incorporation of visual temporal cues into the encoding process is often ignored, which reduces recognition accuracy. In this paper, a spatio-temporal video encoding method is proposed that improves the accuracy of video event recognition. Frame-level descriptors are computed by fine-tuning Convolutional Neural Network (CNN) concept-score extractors pre-trained on ImageNet. The descriptors are then encoded, normalized, and fed to the classifier to discriminate between video events. The main contribution is the use of the temporal dimension of video signals to construct a spatio-temporal vector of locally aggregated descriptors (VLAD) encoding scheme. The proposed encoding is shown to take the form of a non-convex constrained optimization problem with ℓ0-norm terms, which is transformed, via a Gaussian approximation, into a smoothed version. This makes the cost function differentiable and overcomes the non-smoothness challenge. The proposed method achieves performance comparable to state-of-the-art video event recognition schemes on two public datasets: Columbia consumer video (CCV) and Kinetics-400.
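To make the ingredients of the abstract concrete, the following is a minimal sketch, not the paper's actual implementation: a standard VLAD encoder over frame-level descriptors, a simple spatio-temporal variant that splits frames into temporal segments before encoding (the segment count `n_segments` and the function names are illustrative assumptions), and the common Gaussian surrogate that smooths an ℓ0-norm term, sum(1 - exp(-x²/(2σ²))), so it becomes differentiable.

```python
import numpy as np

def vlad_encode(descriptors, centers):
    """Standard VLAD: aggregate residuals of descriptors to their nearest codebook centers."""
    K, d = centers.shape
    # Hard-assign each descriptor to its nearest center.
    dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    assign = np.argmin(dists, axis=1)
    v = np.zeros((K, d))
    for k in range(K):
        sel = descriptors[assign == k]
        if len(sel):
            v[k] = (sel - centers[k]).sum(axis=0)
    # Intra-normalization per center, then global L2 normalization (common VLAD practice).
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    v = np.where(norms > 0, v / norms, v).ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def spatiotemporal_vlad(frame_descriptors, centers, n_segments=3):
    """Illustrative spatio-temporal extension: VLAD-encode each temporal
    segment of the frame sequence and concatenate the results."""
    segments = np.array_split(frame_descriptors, n_segments)
    return np.concatenate([vlad_encode(seg, centers) for seg in segments])

def smoothed_l0(x, sigma=0.1):
    """Gaussian approximation of the l0 norm: each nonzero entry contributes
    close to 1, each zero entry contributes 0, and the sum is differentiable."""
    return float(np.sum(1.0 - np.exp(-(x ** 2) / (2.0 * sigma ** 2))))
```

The smoothed ℓ0 surrogate tends to the true ℓ0 count as σ → 0, which is why it can replace the non-smooth term in a gradient-based solver; the segment-wise VLAD concatenation is only one plausible way to inject temporal structure into the encoding.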