Control problems defined over a fixed finite time interval are discussed, and some relations among stability, reachability, and optimality are described. A typical problem in deterministic dynamical systems is the "regulator problem" discussed by R. E. Kalman, but he solved it only for linear systems without constraints on the control forces; moreover, in Kalman's solution the time interval is free. Here we discuss a regulator problem over a fixed finite time interval in the sense of (α, β, T)-BIBO-stability (|uᵢ(t)| ≤ α (i = 1, 2, …, n) implies |yᵢ(t)| ≤ β (i = 1, 2, …, m) for all t ∈ [0, T]), in both the linear and nonlinear cases. For stationary systems under suitable conditions, it is concluded that a certain boundedness condition on the norm of f(x(t)) for all t ∈ [0, T] is a sufficient condition for constructing a regulator in the sense of (α, β, T)-BIBO-stability.
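The (α, β, T)-BIBO-stability condition used above may be stated formally as follows (this is only a restatement of the definition given in the abstract, with n inputs and m outputs as stated there):

```latex
% (alpha, beta, T)-BIBO-stability over the fixed finite interval [0, T]:
% componentwise bounded inputs imply componentwise bounded outputs.
\[
  |u_i(t)| \le \alpha \;\; (i = 1, 2, \dots, n)
  \;\Longrightarrow\;
  |y_i(t)| \le \beta \;\; (i = 1, 2, \dots, m)
  \quad \text{for all } t \in [0, T].
\]
```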
In stochastic systems, it is natural to require that the control satisfy both the given stochastic stability condition (as defined by H. J. Kushner) and the optimality condition for a prescribed performance index.
In general, however, a control satisfying the stability condition fails to satisfy optimality, and vice versa. We discuss this problem specifically for a linear, constant-coefficient stochastic system written as an Itô stochastic differential equation with a quadratic performance index. Let the given stochastic stability be (ρ, m, V(x(t)), T). Assuming that there exists a matrix L such that 0 ≤ V(x(t)) ≤ x'Lx for t ∈ [0, T], a non-positivity condition on a matrix composed of the system matrices, the performance index, and the matrix L combines the stochastic stability with the optimality.
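The assumed bound on the Lyapunov-type function V can be written as follows (again only a restatement of the condition in the abstract; L is the matrix whose existence is assumed there):

```latex
% Quadratic upper bound on V over the fixed interval [0, T],
% with x' denoting the transpose of the state vector x.
\[
  0 \le V(x(t)) \le x'(t)\, L\, x(t)
  \quad \text{for all } t \in [0, T].
\]
```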