IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
Learning Rules for Recurrent Neural Networks using Perturbation and their Application to Neuro-Control
Yutaka Maeda, Yakichi Kanata

1993 Volume 113 Issue 6 Pages 402-408


This paper proposes two learning rules for recurrent neural networks: one based on an ordinary (simple) perturbation and one based on a simultaneous perturbation.
The first learning rule uses simple perturbation signals: it perturbs one weight at a time and evaluates the evaluation function with and without the perturbation. A difference approximation of these two values estimates the partial derivative of the evaluation function with respect to that weight, and the resulting estimates are used to update all weights. The second rule, on the other hand, uses a simultaneous perturbation, in which perturbations are added to all weights at once; a single difference approximation then yields the updating quantities for all weights.
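The two rules above can be sketched as follows, assuming a scalar evaluation function J(w) of a weight vector w. The function names, the one-sided difference form, the sign-vector perturbation, and the step sizes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def simple_perturbation_update(J, w, c=1e-3, alpha=0.05):
    """Rule 1 (sketch): perturb one weight at a time and estimate each
    partial derivative of J by a one-sided difference approximation."""
    g = np.empty_like(w)
    J0 = J(w)                            # evaluation without perturbation
    for i in range(w.size):
        wp = w.copy()
        wp[i] += c                       # perturb only the i-th weight
        g[i] = (J(wp) - J0) / c          # difference approximation
    return w - alpha * g                 # update all weights

def simultaneous_perturbation_update(J, w, c=1e-3, alpha=0.05,
                                     rng=np.random.default_rng()):
    """Rule 2 (sketch): perturb all weights simultaneously with a random
    sign vector; one extra evaluation of J yields updates for every weight."""
    s = rng.choice([-1.0, 1.0], size=w.shape)   # simultaneous perturbation
    dJ = (J(w + c * s) - J(w)) / c              # single difference
    return w - alpha * dJ * s                   # s_i = +-1, so * equals /
```

The practical difference: the first rule needs one extra evaluation of J per weight, while the second needs only one extra evaluation in total, regardless of the number of weights.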
When a neural network is used as a direct controller for an unknown plant, a desired signal corresponding to the plant output is usually given, and the evaluation function is typically the squared error between the desired and actual plant outputs. Obtaining the updating quantities for the weights therefore normally requires the sensitivity function of the plant. The proposed rules, however, are applicable to this problem without such information about the plant. Some numerical results are shown.
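A minimal sketch of the control setting described above, assuming the simultaneous-perturbation rule: the plant below is a hypothetical black box, and the controller, step sizes, and training loop are illustrative assumptions. Only values of the squared-error evaluation are used, so no plant sensitivity (Jacobian) is ever computed.

```python
import numpy as np

def plant(u):
    # Hypothetical black-box plant: the learning rule never uses this
    # expression, only the plant's output values.
    return 0.5 * u + 0.2 * np.tanh(u)

def controller(w, r):
    # Tiny direct controller: weighted features of the reference signal r.
    return w[0] * r + w[1] * np.tanh(r)

def evaluation(w, refs, desired):
    # Squared error between desired and actual plant outputs.
    y = np.array([plant(controller(w, r)) for r in refs])
    return float(np.sum((desired - y) ** 2))

def train(refs, desired, steps=300, c=1e-3, alpha=0.02, seed=0):
    # Simultaneous-perturbation training of the controller weights.
    rng = np.random.default_rng(seed)
    w = np.zeros(2)
    for _ in range(steps):
        s = rng.choice([-1.0, 1.0], size=w.shape)
        dJ = (evaluation(w + c * s, refs, desired)
              - evaluation(w, refs, desired)) / c
        w -= alpha * dJ * s              # per-weight update from one difference
    return w
```

Because the update needs only two evaluations of the error per step, the plant can remain entirely unknown to the learning rule.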

© The Institute of Electrical Engineers of Japan