2021 Volume 50 Issue 2 Pages 257-283
In this paper, we discuss a theory of generalization error that explains a principle of deep learning. Deep learning is a statistical methodology that uses multi-layer neural networks as models, and it has been in the spotlight owing to its high empirical performance. Despite this success, the complicated structure of multi-layer models means that the mechanism behind their performance is still not fully understood. This paper surveys several attempts to describe the principles underlying that performance, with a particular focus on the approximation error of neural network models and the complexity error arising from the learning procedure. Further, we discuss which parts of the principle of deep learning have been elucidated and which remain unexplained.
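The two error sources named above are commonly formalized through the standard risk decomposition; the following display uses generic notation (an empirical risk minimizer $\hat{f}$ over a model class $\mathcal{F}$, and a target function $f^{*}$), which is a conventional sketch rather than notation taken from this paper:

$$
R(\hat{f}) - R(f^{*})
= \underbrace{\Bigl( R(\hat{f}) - \inf_{f \in \mathcal{F}} R(f) \Bigr)}_{\text{complexity (estimation) error}}
+ \underbrace{\Bigl( \inf_{f \in \mathcal{F}} R(f) - R(f^{*}) \Bigr)}_{\text{approximation error}}
$$

Here $R$ denotes the expected risk; the first term reflects how well the learning procedure selects among functions in $\mathcal{F}$ from finite data, while the second reflects how well the multi-layer network class $\mathcal{F}$ can represent the target $f^{*}$ at all.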