Abstract
Following the idea of Huber's maximum likelihood-type estimator (M-estimator), Polyak and Tsypkin proposed a robust recursive identification method analogous to the recursive least squares method. The estimator is chosen to minimize the maximum asymptotic variance over a convex class of innovation distributions. However, they did not discuss its convergence properties thoroughly because of difficulties arising from the approximations introduced in the identification method and from correlations between observations. In this paper, we discuss the convergence properties of the robust recursive identification method for linear stochastic models from the viewpoints of the ODE approach and martingale convergence theory. We present two convergence theorems for the robust identification method.
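For orientation, a minimal sketch of a Huber-type robust recursion of the kind alluded to above, written under assumed notation (parameter estimate $\hat{\theta}_t$, regressor $\varphi_t$, gain matrix $\Gamma_t$, clipping level $k$); this is an illustrative form, not necessarily the exact algorithm analyzed in the paper:

\[
\hat{\theta}_{t+1} = \hat{\theta}_t + \Gamma_t \varphi_t \, \psi\!\left(y_t - \varphi_t^{\top}\hat{\theta}_t\right),
\qquad
\psi(e) =
\begin{cases}
e, & |e| \le k,\\
k \operatorname{sgn}(e), & |e| > k,
\end{cases}
\]

where the bounded influence function $\psi$ (Huber's choice) limits the effect of outlying prediction errors, in contrast to the unbounded linear correction used by recursive least squares.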