Abstract
Learning control can achieve a desired plant output with high precision by iteratively modifying the input using the error data obtained in the previous trial. There is a trade-off between the uncertainty of the plant and properties of the error sequence, such as the convergence rate and the maximum error magnitude. A design method that can easily accommodate this trade-off is desired.
This paper presents a design method for a learning control system for single-input single-output discrete-time systems. A performance index reflecting both the convergence rate and the effect of modelling errors on convergence is introduced, and the design problem is formulated as a least-squares control problem. The optimal controller is given explicitly as a transfer function, which is well suited to implementation on a computer. Convergence of the sequence of output errors is guaranteed over a finite time interval.
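The iterative scheme described above can be illustrated with a minimal sketch. This is not the paper's optimal least-squares controller; the plant model (given by assumed Markov parameters with a unit delay), the learning gain, and the trial count are all illustrative assumptions. Each trial replays the task, then corrects the input at step t using the error observed at step t+1 in the previous trial.

```python
import numpy as np

def simulate(markov, u):
    """Output of a discrete-time SISO plant given its impulse
    response (Markov parameters) and an input sequence u."""
    N = len(u)
    y = np.zeros(N)
    for t in range(N):
        for j in range(min(t + 1, len(markov))):
            y[t] += markov[j] * u[t - j]
    return y

def ilc(markov, y_des, gain=0.5, trials=50):
    """Refine the input over repeated trials using the error
    data recorded in the previous trial (basic ILC update)."""
    N = len(y_des)
    u = np.zeros(N)
    max_errors = []
    for _ in range(trials):
        e = y_des - simulate(markov, u)
        max_errors.append(np.max(np.abs(e)))
        # The assumed plant has a one-step delay, so the error at
        # time t+1 is used to correct the input at time t.
        u[:-1] += gain * e[1:]
    return u, max_errors

markov = np.array([0.0, 1.0, 0.3, 0.1])     # assumed unit-delay plant
y_des = np.sin(np.linspace(0.0, np.pi, 8))  # desired output trajectory
u, errs = ilc(markov, y_des)
```

With these assumed values the update is a sup-norm contraction on the error over the finite interval, so the maximum error shrinks from trial to trial, mirroring the guaranteed convergence the abstract refers to.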