Abstract
In this paper we consider an optimal control problem for partially observable Markov decision processes with finite states, signals, and actions over an infinite horizon. It is shown that there exist ε-optimal piecewise-linear value functions and piecewise-constant policies which are simple, meaning that they consist of only finitely many pieces, each defined on a convex polyhedral set. An algorithm based on the method of successive approximations is developed to compute an ε-optimal policy and an ε-optimal cost. Furthermore, a special class of stationary policies, called finitely transient, is considered. It is shown that such policies have attractive properties which allow a partially observable Markov decision chain to be converted into an ordinary finite-state Markov decision chain.