2025, Vol. 61, No. 3, pp. 104-114
This paper proposes a model compression method that reduces the number of nonlinear activation functions in continuous-time recurrent neural networks (RNNs). It is shown that ensuring the internal stability of the compressed RNN guarantees that of the original RNN. An error bound between the outputs of the compressed RNN and the original RNN is derived. Moreover, an optimization problem for reducing this bound is formulated and relaxed to a semi-definite programming problem. Furthermore, it is shown that the proposed method tends to produce a compressed RNN whose output is close to that of the original RNN. The proposed method is demonstrated on a simple numerical example.
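As an illustration of the semi-definite programming machinery the abstract refers to, the following is a minimal sketch rather than the paper's actual formulation: it poses a quadratic Lyapunov certificate of internal stability for a hypothetical linear system matrix A (standing in for the linear part of a compressed RNN) as an SDP feasibility problem in CVXPY. The matrix A, the margin eps, and the solver choice are all illustrative assumptions, not values or constraints from the paper.

```python
import numpy as np
import cvxpy as cp

# Hypothetical linear part of a compressed RNN (illustrative values, not from the paper).
A = np.array([[-1.0,  0.5],
              [-0.3, -2.0]])
n = A.shape[0]

# Quadratic Lyapunov certificate: find P = P^T > 0 with A^T P + P A < 0.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # small margin to enforce strict definiteness numerically
constraints = [
    P >> eps * np.eye(n),
    A.T @ P + P @ A << -eps * np.eye(n),
]

# Feasibility problem posed as an SDP; any feasible P certifies internal
# stability of the linear dynamics x' = A x.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("status:", prob.status)   # 'optimal' means a certificate P was found
print("P =\n", P.value)
```

The paper's optimization problem presumably also accounts for the nonlinear activations and the output-error bound; the sketch is meant only to convey the generic step of casting a linear matrix inequality as an SDP feasibility problem.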