Journal of the Japan Society for Precision Engineering
Online ISSN : 1882-675X
Print ISSN : 0912-0289
ISSN-L : 0912-0289
Paper
Temporal Cross-Modal Attention for Audio-Visual Event Localization
Yoshiki NAGASAKI, Masaki HAYASHI, Naoshi KANEKO, Yoshimitsu AOKI

2022 Volume 88 Issue 3 Pages 263-268

Abstract

In this paper, we propose a new method for audio-visual event localization 1), the task of finding the temporal segments in which an audio event and a visual event correspond. While previous methods use Long Short-Term Memory (LSTM) networks to extract temporal features, recurrent neural networks such as LSTMs cannot precisely learn long-term dependencies. We therefore propose a Temporal Cross-Modal Attention (TCMA) module, which extracts temporal features from the two modalities more precisely. Inspired by the success of attention modules in capturing long-term features, TCMA incorporates self-attention. With this module, we localize audio-visual events precisely and achieve higher accuracy than previous works.
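The abstract does not specify the module's architecture, so the following is only a minimal sketch of what a cross-modal attention block with temporal self-attention might look like in PyTorch. The class name, feature dimension, head count, and the particular ordering of cross- and self-attention are all assumptions for illustration, not the paper's specification.

import torch
import torch.nn as nn

class TemporalCrossModalAttention(nn.Module):
    # Hypothetical TCMA-style block: each modality's temporal
    # sequence attends to the other modality (cross-attention),
    # then to itself over time (self-attention). All dimensions
    # and layer choices here are illustrative assumptions.
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_a2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_v2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, visual):
        # audio, visual: (batch, time, dim) per-segment features
        a, _ = self.cross_a2v(audio, visual, visual)  # audio queries visual
        v, _ = self.cross_v2a(visual, audio, audio)   # visual queries audio
        a, _ = self.self_a(a, a, a)                   # temporal self-attention
        v, _ = self.self_v(v, v, v)
        return a, v

# Usage: a batch of 2 clips, each split into 10 segments with
# 256-dimensional features per modality.
audio = torch.randn(2, 10, 256)
visual = torch.randn(2, 10, 256)
tcma = TemporalCrossModalAttention()
a_out, v_out = tcma(audio, visual)
print(a_out.shape, v_out.shape)  # torch.Size([2, 10, 256]) for both

Because attention computes pairwise interactions across all time steps, a block like this can relate distant segments directly, which is the motivation the abstract gives for replacing LSTM-based temporal modeling.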

© 2022 The Japan Society for Precision Engineering