Gradient methods are fundamental algorithms for solving optimization problems, and various extensions of them have been developed to address large-scale problems. These methods typically require regularity conditions, such as Lipschitz continuity of the gradient, to ensure favorable convergence properties. This article focuses on the analysis of gradient methods without such regularity conditions. We begin with the steepest descent method for unconstrained optimization problems, for which we introduce an Armijo-type backtracking line search. This backtracking is particularly suitable in this setting, as it allows us to derive subsequential convergence to a stationary point. We then extend the analysis of the steepest descent method to proximal gradient methods for composite optimization problems and present two kinds of backtracking strategies. Moreover, we discuss generalizations of the proximal term, such as Bregman-type extensions.
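The steepest descent method with an Armijo-type backtracking line search can be sketched as follows. This is a minimal illustration, not the article's exact scheme: the function names, parameter values, and stopping rule are assumptions chosen for the example. The key point, consistent with the setting above, is that no Lipschitz constant of the gradient is used; the step size is instead found by shrinking a trial step until a sufficient-decrease condition holds.

```python
import numpy as np

def armijo_gradient_descent(f, grad, x0, s=1.0, beta=0.5, sigma=1e-4,
                            tol=1e-8, max_iter=1000):
    """Steepest descent with an Armijo-type backtracking line search.

    No Lipschitz continuity of grad(f) is assumed: the step size t is
    obtained by backtracking from an initial trial step s until the
    Armijo condition  f(x - t*g) <= f(x) - sigma * t * ||g||^2  holds.
    (All parameter values here are illustrative defaults.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break  # approximate stationary point reached
        t = s
        # Backtrack: shrink t by the factor beta until sufficient decrease.
        while f(x - t * g) > f(x) - sigma * t * gnorm2:
            t *= beta
        x = x - t * g
    return x

# Hypothetical usage: minimize the quadratic f(x) = ||x||^2 / 2,
# whose unique minimizer (and only stationary point) is the origin.
f = lambda x: 0.5 * (x @ x)
grad = lambda x: x
x_star = armijo_gradient_descent(f, grad, np.array([3.0, -4.0]))
```

In this sketch the backtracking loop terminates for any differentiable f bounded below along the ray, which is what makes the scheme attractive when Lipschitz gradient continuity is unavailable.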