Characterizing the typical solutions found by deep learning is crucial to understanding and controlling it. Because real deep neural networks (NNs) have a complex structure, conventional theoretical analyses employ various simplified mathematical models. In this study, we describe a mathematical model of a single hidden layer of an NN, namely the integral representation of NNs, together with its right-inverse (analysis) operator, the ridgelet transform. Whereas the classical ridgelet transform was obtained heuristically, we recently developed a systematic technique for deriving it. As an application, we constructed NNs on manifolds (noncompact symmetric spaces) and derived the associated ridgelet transforms.
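As a concrete illustration (not part of the abstract itself), the integral representation models a single hidden layer as a continuous superposition of ridge functions, S[γ](x) = ∫ γ(a, b) σ(a·x − b) da db, which a finite-width network approximates by sampling hidden parameters. The ReLU activation, Gaussian sampling distribution, and coefficient function γ below are illustrative assumptions, a minimal sketch rather than the paper's construction:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def integral_representation(x, gamma, n_samples=10000, seed=0):
    """Monte-Carlo discretization of S[gamma](x) = E_{(a,b)}[gamma(a,b) * relu(a.x - b)],
    i.e. an "infinitely wide" single-hidden-layer NN approximated with n_samples units.
    The sampling distribution for (a, b) is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    a = rng.standard_normal((n_samples, d))   # hidden-layer weights a
    b = rng.standard_normal(n_samples)        # hidden-layer biases b
    coeff = gamma(a, b)                       # output coefficients gamma(a, b)
    # Average of coefficient-weighted ridge functions over sampled (a, b)
    return (coeff * relu(x @ a.T - b)).mean(axis=-1)

# Hypothetical smooth coefficient function gamma(a, b), for illustration only
gamma = lambda a, b: np.exp(-0.5 * (np.sum(a**2, axis=-1) + b**2))
x = np.array([[0.5, -0.2], [1.0, 0.3]])
y = integral_representation(x, gamma)  # one scalar output per input point
```

The ridgelet transform plays the inverse role: given a target function f, it produces a coefficient function γ such that S[γ] reconstructs f.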