Host: The Japanese Society for Artificial Intelligence
Name: The 37th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 37
Location: [in Japanese]
Date: June 06, 2023 - June 09, 2023
The main reason deep learning works well on real-world data is that it is designed to preserve known mathematical structures of the target systems, rather than that it is a universal approximator. For example, convolutional neural networks are designed to be translation invariant (i.e., symmetric under translation), which means that the extracted features do not depend on the position of an object in the image. The same holds for graph neural networks, which are permutation invariant. Neural networks with such symmetries have in recent years been studied under the name of geometric deep learning, and they are being interpreted as natural transformations in category theory. I demonstrate that neural networks that learn the dynamics of physical phenomena, called deep physical models, can also be interpreted as natural transformations. More generally, I introduce deep learning designed on the basis of mathematical structures and discuss its interpretation from the viewpoint of category theory.
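
To make the two symmetries concrete: in general, a map f is equivariant under a group action when f(g·x) = g·f(x) for all group elements g, and invariance is the special case where the action on the output is trivial; this commuting equation is the naturality square behind the category-theoretic interpretation mentioned above. Below is a minimal NumPy sketch (my own illustration, not code from the talk; the function names conv2d_valid, cnn_features, and gnn_readout are hypothetical). It checks that a hand-rolled convolution followed by global max pooling yields a feature unchanged by translating the object, and that sum aggregation over graph nodes yields a readout unchanged by permuting them.

```python
# Minimal sketch of translation invariance (CNN-style) and
# permutation invariance (GNN-style). Illustration only.
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D cross-correlation with 'valid' padding."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_features(image, kernel):
    """Convolution + global max pooling -> translation-invariant feature."""
    return conv2d_valid(image, kernel).max()

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))

image = np.zeros((16, 16))
image[2:5, 2:5] = rng.standard_normal((3, 3))        # an "object" patch
shifted = np.roll(image, shift=(6, 4), axis=(0, 1))  # same object, translated

# The pooled feature does not depend on the object's position.
assert np.isclose(cnn_features(image, kernel), cnn_features(shifted, kernel))

def gnn_readout(node_features):
    """Sum aggregation over nodes -> permutation-invariant graph feature."""
    return node_features.sum(axis=0)

X = rng.standard_normal((5, 4))  # 5 nodes, 4 features each
perm = rng.permutation(5)

# Reordering the nodes leaves the readout unchanged.
assert np.allclose(gnn_readout(X), gnn_readout(X[perm]))
```

Note the design choice in the first check: the convolution layer by itself is translation equivariant (the feature map shifts along with the input), and it is the global pooling step that discards position and turns that equivariance into the invariance described in the abstract.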