Transactions of the Institute of Systems, Control and Information Engineers
Online ISSN : 2185-811X
Print ISSN : 1342-5668
ISSN-L : 1342-5668
Paper
L2 Induced Norm Analysis for Nonnegative Input Signals and Its Application to Stability Analysis of Recurrent Neural Networks
Hayato Motooka, Yoshio Ebihara

2022 Volume 35 Issue 2 Pages 29-37

Abstract

A recurrent neural network (RNN) is a class of deep neural networks that can imitate the behavior of dynamical systems thanks to its feedback mechanism. However, the feedback mechanism may cause network instability, and hence the stability analysis of RNNs has been an important issue. From a control-theoretic viewpoint, we can readily apply the small gain theorem to the stability analysis of an RNN by representing it as a feedback connection of a linear time-invariant (LTI) system and a static nonlinear activation function, typically a rectified linear unit (ReLU). It is nonetheless true that the standard small gain theorem leads to conservative results, since it does not take into account the important property that the ReLU returns only nonnegative signals. This motivates us to analyze the L2 induced norm of LTI systems for nonnegative input signals, which is referred to as the L2+ induced norm in this paper. We characterize an upper bound of the L2+ induced norm by copositive programming, and then derive a numerically tractable semidefinite programming problem for (loosened) upper bound computation. We finally derive an L2+-induced-norm-based small gain theorem for the stability analysis of RNNs and illustrate its effectiveness by numerical examples.
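The following Python sketch (not the authors' code) illustrates the loosening idea described in the abstract. The exact L2+ characterization involves a copositivity condition; a standard way to relax copositivity to a semidefinite program is to certify the quadratic form with a positive semidefinite part plus an entrywise nonnegative multiplier N, exploiting the fact that w'Nw >= 0 whenever the input w is nonnegative. The discrete-time state-space setting, the solver choice, and all system matrices below are illustrative assumptions; with N fixed to zero the program reduces to the standard bounded real lemma, so the bound it returns never exceeds the ordinary L2 induced norm.

import cvxpy as cp
import numpy as np

def l2plus_upper_bound(A, B, C, D):
    """Upper bound on the L2+ induced norm of x+ = A x + B w, z = C x + D w,
    valid for entrywise nonnegative inputs w(k) >= 0 (a sketch of the SDP
    loosening of the copositive characterization)."""
    n, m = B.shape
    P = cp.Variable((n, n), symmetric=True)   # Lyapunov certificate V(x) = x'Px
    N = cp.Variable((m, m), symmetric=True)   # entrywise-nonnegative multiplier
    gsq = cp.Variable(nonneg=True)            # gamma^2

    # Dissipation inequality V(x+) - V(x) + |z|^2 - gamma^2 |w|^2 <= -w'Nw.
    # Since w'Nw >= 0 on the nonnegative orthant, this certifies the gain
    # bound for nonnegative inputs; N = 0 recovers the bounded real lemma.
    M = cp.bmat([
        [A.T @ P @ A - P + C.T @ C, A.T @ P @ B + C.T @ D],
        [B.T @ P @ A + D.T @ C,
         B.T @ P @ B + D.T @ D - gsq * np.eye(m) + N],
    ])
    prob = cp.Problem(cp.Minimize(gsq), [P >> 0, N >= 0, M << 0])
    prob.solve(solver=cp.SCS)
    return float(np.sqrt(gsq.value))

# Illustrative stable two-input system.
A = np.array([[0.6, 0.2], [0.0, 0.5]])
B = np.array([[1.0, 0.0], [0.5, 1.0]])
C = np.array([[1.0, 0.5]])
D = np.zeros((1, 2))
print("L2+ induced norm upper bound:", l2plus_upper_bound(A, B, C, D))

In the small-gain application outlined in the abstract, such a bound would certify stability of the RNN feedback loop whenever it is smaller than the reciprocal of the ReLU gain (which is 1), i.e., whenever the computed gamma is less than one.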

© 2022 The Institute of Systems, Control and Information Engineers