Host: The Japanese Society for Artificial Intelligence
Name: The 35th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 35
Location: [in Japanese]
Date: June 08, 2021 - June 11, 2021
We propose an open-ended multimodal video question answering (VideoQA) method that simultaneously takes motion, appearance, and audio signals as input and outputs textual answers. Although audio information is useful for understanding video content alongside visual information, standard open-ended VideoQA methods exploit only the motion and appearance signals and ignore the audio signal. Moreover, owing to the lack of fine-grained modeling of multimodal data and of effective fusion across modalities, the few prior works that use motion, visual appearance, and audio signals have shown poor results on public benchmarks. To address these problems, we propose multi-stream 3-dimensional convolutional networks (3D ConvNets) modulated with textual conditioning information. Our model feeds the fine-grained motion-appearance and audio information into multiple 3D ConvNets and then modulates their intermediate representations using question-guided spatiotemporal information. Experimental results on public open-ended VideoQA datasets with audio tracks show that our method effectively combines motion, appearance, and audio signals and outperforms state-of-the-art methods.
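To make the question-guided modulation concrete, the following is a minimal PyTorch sketch of one way such conditioning can be realized: FiLM-style feature-wise modulation, in which per-channel scale and shift parameters predicted from a question embedding are applied to an intermediate 3D ConvNet feature map. This is an illustrative assumption, not the authors' released implementation; all module names, layer sizes, and tensor shapes below are hypothetical.

```python
# Sketch of question-guided modulation of a 3D ConvNet feature map
# (FiLM-style conditioning; illustrative, not the paper's actual code).
import torch
import torch.nn as nn

class QuestionModulated3DBlock(nn.Module):
    """One 3D conv block whose intermediate features are scaled and
    shifted by parameters predicted from a question embedding."""
    def __init__(self, in_ch: int, out_ch: int, q_dim: int):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm3d(out_ch, affine=False)
        # Predict a per-channel scale (gamma) and shift (beta) from the question.
        self.film = nn.Linear(q_dim, 2 * out_ch)

    def forward(self, x: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        # x: (B, C_in, T, H, W) video features; q: (B, q_dim) question embedding
        h = self.bn(self.conv(x))
        gamma, beta = self.film(q).chunk(2, dim=-1)
        # Broadcast the modulation over the temporal and spatial dimensions.
        gamma = gamma[:, :, None, None, None]
        beta = beta[:, :, None, None, None]
        return torch.relu(gamma * h + beta)

# Usage: modulate one stream (e.g. appearance); motion and audio streams
# would each get their own stack of such blocks before multimodal fusion.
block = QuestionModulated3DBlock(in_ch=64, out_ch=128, q_dim=512)
video_feat = torch.randn(2, 64, 8, 14, 14)   # (batch, channels, T, H, W)
question_emb = torch.randn(2, 512)           # pooled question encoding
out = block(video_feat, question_emb)        # (2, 128, 8, 14, 14)
```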