2026, Vol. 22, No. 4, pp. 236-241
Advances in deep learning over the past decade have transformed neural networks from simple pattern-recognition models into enormous language models capable of solving complex intellectual tasks. In the first half of this manuscript, we explain the transformer, a deep learning architecture that is essential for realizing large-scale machine learning. In the second half, we discuss potential applications of machine learning to the natural sciences, citing several examples. We consider from several perspectives whether deep learning, often characterized as data-driven and highly black-box modeling, can make a fundamental contribution to science.
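The core operation of the transformer mentioned above is scaled dot-product attention, in which each token's output is a softmax-weighted mixture of value vectors. The following is a minimal sketch of that operation in NumPy; the function name, the toy dimensions, and the random inputs are illustrative assumptions, not part of the manuscript.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d)) V, the heart of the transformer layer."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ V                               # weighted mixture of value vectors

# Toy self-attention: 3 tokens with 4-dimensional embeddings, Q = K = V = X.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one mixed 4-dimensional vector per token
```

In a full transformer this operation is wrapped with learned linear projections, multiple heads, and feed-forward layers; the sketch keeps only the attention arithmetic itself.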