2017, Volume 27, Issue 1, Pages 21-30
This paper deals with Riemannian optimization, that is, optimization on Riemannian manifolds. The theories of Euclidean optimization and of Riemannian manifolds are first briefly reviewed, together with some simple and motivating examples, followed by the theory of Riemannian optimization. Retractions and vector transports on Riemannian manifolds are then introduced, following the literature, in order to describe a general Riemannian optimization algorithm. Recent convergence results for several types of Riemannian conjugate gradient methods, such as the Fletcher-Reeves and Dai-Yuan types, are then given and discussed in detail. Some applications of Riemannian optimization to problems of current interest are also introduced: 1) singular value decomposition in numerical linear algebra; 2) canonical correlation analysis and topographic independent component analysis as statistical methods; 3) low-rank tensor completion for machine learning; 4) optimal model reduction in control theory; and 5) the doubly stochastic inverse eigenvalue problem.
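As a concrete illustration of the general algorithm built from the ingredients mentioned above (a Riemannian gradient, a retraction, a vector transport, and a Fletcher-Reeves coefficient), the following Python sketch minimizes the Rayleigh quotient x^T A x over the unit sphere, whose minimizer is an eigenvector for the smallest eigenvalue of A. This is a minimal, self-contained example written for this summary, not code from the paper; the function names (riemannian_cg, retract, transport) and the Armijo backtracking line search are illustrative choices, and a full-featured implementation would typically rely on a library such as Manopt or Pymanopt.

```python
import numpy as np

def rayleigh(A, x):
    """Cost f(x) = x^T A x on the unit sphere; its minimizer is an
    eigenvector associated with the smallest eigenvalue of A."""
    return x @ A @ x

def riem_grad(A, x):
    """Riemannian gradient: project the Euclidean gradient 2*A*x onto the
    tangent space T_x S^{n-1} = {v : x^T v = 0}."""
    egrad = 2.0 * A @ x
    return egrad - (x @ egrad) * x

def retract(x, v):
    """Retraction on the sphere: step along the tangent vector v, then
    normalize back onto S^{n-1}."""
    y = x + v
    return y / np.linalg.norm(y)

def transport(y, v):
    """Vector transport by orthogonal projection onto the tangent space at y."""
    return v - (y @ v) * y

def riemannian_cg(A, x0, max_iter=500, tol=1e-8):
    """Fletcher-Reeves-type Riemannian conjugate gradient with a simple
    Armijo backtracking line search (illustrative sketch only)."""
    x = x0 / np.linalg.norm(x0)
    g = riem_grad(A, x)
    d = -g
    for _ in range(max_iter):
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break
        if g @ d >= 0:                           # safeguard: restart with steepest descent
            d = -g
        # Backtracking line search along the retracted curve t -> R_x(t*d).
        t, f0, slope = 1.0, rayleigh(A, x), g @ d
        while rayleigh(A, retract(x, t * d)) > f0 + 1e-4 * t * slope:
            t *= 0.5
        x = retract(x, t * d)                    # update the iterate on the manifold
        g_new = riem_grad(A, x)
        beta = (g_new @ g_new) / gnorm2          # Fletcher-Reeves coefficient
        d = -g_new + beta * transport(x, d)      # transported search direction
        g = g_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T                                  # symmetric test matrix
    x_opt = riemannian_cg(A, rng.standard_normal(50))
    print("computed smallest eigenvalue:", rayleigh(A, x_opt))
    print("reference (numpy eigvalsh):  ", np.linalg.eigvalsh(A)[0])
```

The projection-based retraction and vector transport used above are standard choices on the sphere; other retractions, such as the exponential map, fit the same algorithmic template without changing the surrounding code.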