Bulletin of the Japan Society for Industrial and Applied Mathematics
Online ISSN : 2432-1982
Invited Papers
Convergence of Gradient Methods without Regularity Conditions
Masaru Ito, Shotaro Yagishita

2025 Volume 35 Issue 4 Pages 256-266

Abstract

Gradient methods are fundamental algorithms for solving optimization problems, and numerous extensions of them have been developed to address large-scale problems. These methods typically require regularity conditions, such as Lipschitz continuity of the gradient, to guarantee favorable convergence properties. This article focuses on the analysis of gradient methods without such regularity conditions. We begin with the steepest descent method for unconstrained optimization problems, into which we introduce an Armijo-type backtracking line search. This backtracking is particularly well suited to this setting, as it allows us to derive subsequential convergence to a stationary point. We then extend this argument from the steepest descent method to proximal gradient methods for composite optimization problems and present two kinds of backtracking strategies. Finally, we discuss generalizations of the proximal term, such as Bregman-type extensions.
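The steepest descent method with an Armijo-type backtracking line search mentioned above can be sketched as follows. This is a minimal illustrative implementation, not the authors' specific scheme; the function names, the parameter values (sufficient-decrease constant `c`, shrinkage factor `rho`, initial trial step `t0`), and the toy objective are assumptions for the example. Note that no Lipschitz constant of the gradient is used; the step size is determined solely by the backtracking test.

```python
import numpy as np

def steepest_descent_armijo(f, grad, x0, c=1e-4, rho=0.5, t0=1.0,
                            tol=1e-8, max_iter=1000):
    """Steepest descent with an Armijo-type backtracking line search.

    At each iterate x, the trial step t is shrunk by the factor rho
    until the Armijo sufficient-decrease condition
        f(x - t*g) <= f(x) - c * t * ||g||^2
    holds, where g = grad(x). Lipschitz continuity of grad is not assumed.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:  # (approximate) stationary point reached
            break
        t = t0
        while f(x - t * g) > f(x) - c * t * gnorm2:
            t *= rho  # backtrack until sufficient decrease holds
        x = x - t * g
    return x

# Toy usage (an assumption for illustration): minimize f(x) = x^4,
# whose gradient 4x^3 is not globally Lipschitz continuous.
x_star = steepest_descent_armijo(lambda x: (x**4).sum(),
                                 lambda x: 4 * x**3,
                                 np.array([2.0]))
```

Because the line search only compares function values, it adapts the step size to the local behavior of `f`, which is what makes it suitable when global regularity conditions are unavailable.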

© 2025 The Japan Society for Industrial and Applied Mathematics