The Proceedings of Design & Systems Conference, 2017.27
Online ISSN: 2424-3078
Session ID: 1104

Research on efficiency improvement by parameter adjustment in large-scale problem optimization
*Akinori KUBO, Masao ARAKAWA
Abstract
Mathematical programming provides means of finding a solution that minimizes a given objective function; representative approaches include gradient methods and Newton's method. The steepest descent method, regarded as the most fundamental gradient method, lowers the objective function by moving along its negative gradient direction. An extension of steepest descent is the so-called accelerated gradient method. In steepest descent, a solution is reached when ∇f = 0, but the amount of change per step shrinks as the solution is approached, so reaching the solution takes many iterations. The accelerated method, by contrast, has the advantage that the amount of change does not decrease greatly even near the solution, so the analysis time and the number of calculations can be expected to drop substantially. Newton's method carries out the search using the second derivative of the objective function. An extension of Newton's method is the quasi-Newton method, which can be expected to obtain a solution using only first-derivative information; this makes a solution obtainable even when the second derivative is difficult to compute. In this research, we propose a method that combines these two mathematical programming approaches. In addition, by using PSO for parameter adjustment, we search for the best parameters, aiming to reach the optimal solution while keeping the number of calculations comparatively small.
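
As an illustration of the kind of hybrid the abstract describes, the Python sketch below combines an accelerated (momentum) gradient step with a quasi-Newton (BFGS) inverse-Hessian estimate, and uses a small particle swarm optimization to tune the step size and momentum coefficient. This is a minimal sketch under stated assumptions, not the authors' implementation: the test function (Rosenbrock), the specific way the two updates are combined, the helper names hybrid_minimize and pso_tune, and all PSO settings are illustrative choices not taken from the paper.

import numpy as np


def rosenbrock(x):
    # Assumed test objective (not from the paper): f(x) = (1-x0)^2 + 100(x1-x0^2)^2
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2


def rosenbrock_grad(x):
    # Analytic gradient of the Rosenbrock function above
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])


def hybrid_minimize(alpha, beta, x0, tol=1e-6, max_iter=5000):
    # Momentum (accelerated-gradient) steps taken along a BFGS quasi-Newton
    # direction. alpha is the step size and beta the momentum coefficient;
    # these are the parameters the PSO below adjusts. Returns the iteration
    # count (used as the PSO cost) and the final point.
    x = x0.astype(float).copy()
    v = np.zeros_like(x)           # accumulated momentum term
    H = np.eye(len(x))             # BFGS estimate of the inverse Hessian
    g = rosenbrock_grad(x)
    for k in range(max_iter):
        if not np.isfinite(g).all():       # diverged: report the worst cost
            return max_iter, x
        if np.linalg.norm(g) < tol:        # converged: gradient near zero
            return k, x
        v = beta * v - alpha * (H @ g)     # momentum + quasi-Newton step
        x_new = x + v
        g_new = rosenbrock_grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                     # standard BFGS curvature condition
            rho = 1.0 / sy
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return max_iter, x


def pso_tune(n_particles=10, n_steps=30, seed=0):
    # Standard inertia-weight PSO over (alpha, beta), minimizing the number
    # of iterations hybrid_minimize needs to converge.
    rng = np.random.default_rng(seed)
    lo, hi = np.array([1e-4, 0.0]), np.array([0.5, 0.95])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    x0 = np.array([-1.2, 1.0])
    cost = np.array([hybrid_minimize(a, b, x0)[0] for a, b in pos])
    pbest, pbest_cost = pos.copy(), cost.copy()
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_steps):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([hybrid_minimize(a, b, x0)[0] for a, b in pos])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], cost[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()


if __name__ == "__main__":
    (alpha, beta), iters = pso_tune()
    _, x = hybrid_minimize(alpha, beta, np.array([-1.2, 1.0]))
    print(f"alpha={alpha:.4f}  beta={beta:.4f}  iterations={iters:.0f}  f={rosenbrock(x):.2e}")

Using the iteration count as the PSO cost directly mirrors the abstract's stated goal of reducing the number of calculations; the paper's actual cost measure and the precise form of the hybrid update may differ.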
© 2017 The Japan Society of Mechanical Engineers