Abstract
The paper demonstrates that the complexity of the dynamics of recurrent networks with N neurons can be controlled by our gradient methods. The complexity, i.e. the Lyapunov exponent, is calculated by observing the state transitions over a long period T. The first control method is based on conventional learning algorithms for recurrent networks. It achieves high precision, but requires O(N^5 T) expected time. To reduce this expensive run time, we propose a second method based on an approximate relation, reported in our previous papers, between the complexity and a new parameter of the network configuration. This approximation requires only O(N^2) run time. Simulation results show that the first method can control the exponent, and that the approximate method can control it under a restriction. By a combined method that incorporates the proposed control method into the conventional learning algorithm, the networks can learn not only a target time series but also its Lyapunov exponent.
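The abstract does not specify the network model or the estimator used over the period T, so as an illustrative aside, the sketch below shows one standard way to estimate the largest Lyapunov exponent of a discrete-time recurrent network x_{t+1} = tanh(W x_t) by tracking the growth rate of a tangent vector along the trajectory (a Benettin-style method). The tanh dynamics, the function names, and all parameter choices here are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def largest_lyapunov_exponent(W, x0, T=10000, burn_in=1000):
    """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(W @ x_t)
    by propagating a tangent vector and averaging its log growth rate.
    (Assumed model and method; the paper's estimator may differ.)"""
    N = W.shape[0]
    x = x0.copy()
    v = np.random.default_rng(0).normal(size=N)
    v /= np.linalg.norm(v)
    log_sum = 0.0
    for t in range(burn_in + T):
        pre = W @ x                       # pre-activation at state x_t
        x = np.tanh(pre)                  # next state x_{t+1}
        # Jacobian of the map at x_t: diag(1 - tanh(pre)^2) @ W
        J = (1.0 - np.tanh(pre) ** 2)[:, None] * W
        v = J @ v                         # evolve the tangent vector
        norm = np.linalg.norm(v)
        v /= norm                         # renormalize to avoid overflow
        if t >= burn_in:                  # discard transient dynamics
            log_sum += np.log(norm)
    return log_sum / T

# Example: a random network whose weight gain g moves the exponent
# through zero, i.e. across the edge of chaos.
N, g = 100, 1.5
rng = np.random.default_rng(1)
W = g * rng.normal(size=(N, N)) / np.sqrt(N)
x0 = rng.normal(size=N)
print(largest_lyapunov_exponent(W, x0))
```

A positive estimate indicates chaotic (exponentially diverging) dynamics, a negative one indicates stable dynamics, so a gradient method that drives this quantity toward a target value controls the complexity of the network in the sense described above.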