Proceedings of the Annual Conference of JSAI (人工知能学会全国大会論文集)
Online ISSN : 2758-7347
33rd (2019)
セッションID: 2H4-E-2-05

Reducing the Number of Multiplications in Convolutional Recurrent Neural Networks (ConvRNNs)
*Daria VAZHENINA, Atsunori KANEMURA
Abstract

Convolutional variants of recurrent neural networks, ConvRNNs, are widely used for spatio-temporal modeling. Although ConvRNNs are well suited to modeling two-dimensional sequences, the introduction of convolution operations brings additional parameters and increases the computational complexity. This computational load can be an obstacle to putting ConvRNNs into operation in real-world applications. We propose to reduce the number of parameters and multiplications by substituting some convolution operations with the Hadamard product. We evaluate our proposal on the task of next video frame prediction using the Moving MNIST dataset. The proposed method requires 38% fewer multiplications and 21% fewer parameters than the fully convolutional counterpart. At the price of the reduced computational complexity, the performance measured by the structural similarity index measure (SSIM) decreased by about 1.5%. ConvRNNs with reduced computation can be used in a wider range of situations, such as web apps or embedded systems.
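To see why replacing a convolution with a Hadamard product cuts the multiplication count, compare the cost of the two operations on a feature map. The sketch below is an illustrative back-of-the-envelope calculation, not the paper's method: the cell sizes (64×64 maps, 32 channels, 3×3 kernels) and the assumption that one recurrent state-to-state convolution per gate is replaced are hypothetical choices for demonstration.

```python
def conv_mults(h, w, c_in, c_out, k):
    # Multiplications for one k x k convolution with "same" padding:
    # every output element needs k*k*c_in multiply-accumulates.
    return h * w * c_out * c_in * k * k

def hadamard_mults(h, w, c):
    # Elementwise (Hadamard) product: one multiplication per element.
    return h * w * c

# Hypothetical ConvRNN gate sizes (not taken from the paper).
H, W, C, K = 64, 64, 32, 3

# A ConvLSTM-style gate uses an input-to-state and a state-to-state
# convolution; the full gate costs two convolutions.
full = 2 * conv_mults(H, W, C, C, K)

# Substituting the state-to-state convolution with a Hadamard product
# keeps only one convolution plus a cheap elementwise multiply.
reduced = conv_mults(H, W, C, C, K) + hadamard_mults(H, W, C)

print(f"full gate:    {full:,} multiplications")
print(f"reduced gate: {reduced:,} multiplications")
print(f"saving:       {1 - reduced / full:.1%}")
```

Under these toy settings the Hadamard term is negligible next to the remaining convolution, so the per-gate saving approaches 50%; the 38% overall figure reported in the abstract reflects that only some of the convolutions in the full network are substituted.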

© 2019 The Japanese Society for Artificial Intelligence