2023 Volume 59 Issue 8 Pages 353-361
This paper proposes a deep neural network with a modular architecture for model reduction, together with a cost function suited to training the model. In the proposed architecture, each layer is modularized so that the model can be reduced by adjusting the number of layers, allowing the computational load of the model to be switched quickly. So that the reduced model remains accurate even without retraining, the cost function is defined as a weighted average of the model's output errors over the number of layers. The effectiveness of the proposed method is validated through numerical examples on small tasks. Our implementation is available at https://github.com/sy-takano/modularized_dnn_for_model_reduction.
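The core idea of the cost function can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes each module maps a hidden state to a state of the same dimension, treats the output after every module as a usable reduced model, and uses squared error with uniform weights; all function names, the tanh modules, and the uniform weighting are illustrative assumptions.

```python
import numpy as np

def forward_per_depth(x, modules):
    """Run the input through each module in turn, recording the output
    after every module; truncating at depth l gives an l-layer reduced model."""
    outputs = []
    h = x
    for W, b in modules:  # each module is a simple (weight, bias) pair here
        h = np.tanh(W @ h + b)
        outputs.append(h)
    return outputs

def weighted_depth_loss(outputs, target, weights):
    """Cost = weighted average of the model's output errors over the number
    of layers, so reduced models stay accurate without retraining."""
    errors = [np.mean((y - target) ** 2) for y in outputs]
    return float(np.dot(weights, errors) / np.sum(weights))

# Illustrative usage with random 4-dimensional modules
rng = np.random.default_rng(0)
d, n_modules = 4, 3
modules = [(0.1 * rng.standard_normal((d, d)), np.zeros(d))
           for _ in range(n_modules)]
x = rng.standard_normal(d)
target = np.zeros(d)

outs = forward_per_depth(x, modules)
loss = weighted_depth_loss(outs, target, np.ones(n_modules))
```

Training against such a loss encourages every truncation depth, not only the full network, to match the target, which is what allows the layer count to be reduced afterwards without retraining.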