Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
35th (2021)
Session ID : 2Yin5-05

Open-ended Video Question Answering with Multi-stream 3D Convolutional Networks
*Taiki Miyanishi, Motoaki Kawanabe
Abstract

We propose an open-ended multimodal video question answering (VideoQA) method that simultaneously takes motion, appearance, and audio signals as input and outputs textual answers. Although audio information is useful for understanding video content alongside visual information, standard open-ended VideoQA methods exploit only motion and appearance signals and ignore audio. Moreover, owing to the lack of fine-grained modeling of multimodal data and of effective strategies for fusing them, the few prior works that use motion, visual appearance, and audio signals have shown poor results on public benchmarks. To address these problems, we propose multi-stream 3-dimensional convolutional networks (3D ConvNets) modulated with textual conditioning information. Our model feeds the fine-grained motion, appearance, and audio information into multiple 3D ConvNets and then modulates their intermediate representations using question-guided spatiotemporal information. Experimental results on public open-ended VideoQA datasets with audio tracks show that our method effectively combines motion, appearance, and audio signals and outperforms state-of-the-art methods.
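The question-guided modulation of intermediate representations described above resembles feature-wise conditioning, where a question embedding predicts per-channel scale and shift parameters applied to each stream's 3D feature maps before fusion. The following NumPy sketch illustrates that general idea; all names, shapes, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def modulate(features, question_emb, w_gamma, w_beta):
    """Scale and shift each channel of a 3D ConvNet feature map
    using parameters predicted from the question embedding
    (a hypothetical stand-in for question-guided modulation)."""
    gamma = question_emb @ w_gamma  # per-channel scale, shape (C,)
    beta = question_emb @ w_beta    # per-channel shift, shape (C,)
    # features has shape (C, T, H, W); broadcast over space-time.
    return gamma[:, None, None, None] * features + beta[:, None, None, None]

# Illustrative dimensions: C channels, T frames, HxW spatial grid,
# D-dimensional question embedding.
C, T, H, W, D = 8, 4, 7, 7, 16
streams = {name: rng.standard_normal((C, T, H, W))
           for name in ("motion", "appearance", "audio")}
question = rng.standard_normal(D)
params = {name: (rng.standard_normal((D, C)) * 0.1,
                 rng.standard_normal((D, C)) * 0.1)
          for name in streams}

# Modulate each stream, pool over space-time, and fuse by concatenation
# into a joint multimodal representation for answer decoding.
pooled = [modulate(f, question, *params[n]).mean(axis=(1, 2, 3))
          for n, f in streams.items()]
fused = np.concatenate(pooled)  # shape (3 * C,)
print(fused.shape)
```

In this sketch each of the three streams is conditioned independently on the question, so the text can emphasize different channels in, say, the audio stream than in the appearance stream before the fused vector is formed.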

© 2021 The Japanese Society for Artificial Intelligence