Transactions of the Society of Instrument and Control Engineers
Online ISSN : 1883-8189
Print ISSN : 0453-4654
ISSN-L : 0453-4654
An Approximate Method for Optimal Stationary Control of Nonlinear Stochastic Systems
Takayoshi NAKAMIZO

1973 Volume 9 Issue 2 Pages 193-200

Abstract
An approximate technique is presented for determining the optimal stationary control of a class of nonlinear stochastic systems with a quadratic performance criterion. Since the nonlinear stochastic optimal control problem is in general intractable, an approximation is adopted. The first step is to introduce a statistically equivalent linear system in the sense of second-order statistics: given a nonlinear dynamical system, the equivalent linear system is constructed under a Gaussian assumption. This can be viewed as an extension of the Booton-type statistical linearization technique, and it applies even to dynamical systems subject to state-dependent noise. A parameter optimization technique is then used to determine a sub-optimal control law based on the equivalent system. Both the case of bounded control amplitudes and the unbounded case are treated. In the unbounded case the resulting control is linear; a computational algorithm is given, together with some observations on the statistical linearization technique that appear to have been overlooked previously. In the bounded case the resulting control must be nonlinear, so the control problem is split into two separate problems: the parameter optimization problem and the inverse problem of linearization. Parameter optimization shows that the feedback gain depends on the state covariance, and the bounded control rule is then determined by exploiting the non-uniqueness of the solution to the inverse problem.
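
The following is a minimal sketch of the two steps outlined above (statistical linearization under a Gaussian assumption, then parameter optimization of a linear feedback gain), not the paper's algorithm verbatim. The scalar system dx = -x^3 dt + b u dt + sigma dw, the feedback form u = -L x, the stationary cost J = E[q x^2 + r u^2], and all numerical values are illustrative assumptions, not taken from the paper.

# Sketch: Booton-type statistical linearization + parameter optimization
# for dx = -x^3 dt + b*u dt + sigma dw, u = -L*x, J = E[q*x^2 + r*u^2].
# The cubic nonlinearity and all constants are illustrative assumptions.
import numpy as np

b, sigma, q, r = 1.0, 0.5, 1.0, 0.1

def equivalent_gain(P):
    # Booton equivalent gain of f(x) = -x^3 under x ~ N(0, P):
    # k_eq = E[x*f(x)] / E[x^2] = -3*P^2 / P = -3*P.
    return -3.0 * P

def stationary_variance(L, tol=1e-10):
    # The equivalent gain depends on the variance P, which in turn solves
    # the stationary Lyapunov equation 2*(k_eq - b*L)*P + sigma^2 = 0,
    # so iterate the coupled equations to a fixed point (with damping).
    P = 1.0
    for _ in range(500):
        P_new = sigma**2 / (2.0 * (b * L - equivalent_gain(P)))
        if abs(P_new - P) < tol:
            break
        P = 0.5 * (P + P_new)
    return P

def cost(L):
    # Stationary quadratic cost evaluated on the equivalent linear system.
    P = stationary_variance(L)
    return (q + r * L**2) * P

# Parameter optimization: a crude grid search over the feedback gain L.
grid = np.linspace(0.01, 10.0, 2000)
L_opt = grid[np.argmin([cost(L) for L in grid])]
print(f"suboptimal gain L = {L_opt:.4f}, "
      f"variance P = {stationary_variance(L_opt):.4f}, "
      f"cost J = {cost(L_opt):.4f}")

In the bounded-amplitude case described in the abstract, one would additionally replace the linear rule u = -L x by a nonlinear rule (for example, a saturating function) whose Booton equivalent gain at the optimized covariance matches L; the non-uniqueness of this inverse problem leaves freedom in the choice of nonlinearity.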
© The Society of Instrument and Control Engineers (SICE)