システム制御情報学会論文誌
Neural Network Realization of a Convolution-Integral-Type Gradient Method Using Neurons with "Memory" (Inertial Neurons)
堀江 亮太, 相吉 英太郎, 石井 秀教

1998, Vol. 11, No. 3, pp. 112-119

Abstract

First, a new type of model for trajectory methods for solving optimization problems is considered. In this model, the velocity of the trajectory is given in a convolution-integral form over all past gradients of the objective function along the trajectory. This trajectory method can be called a gradient method with the optimizer's "memory" of past gradient information, and the model can be transformed into a second-order differential equation model whose trajectory can escape being trapped in local optima under a suitable initial velocity.
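The abstract does not specify the convolution kernel. As a minimal sketch, assuming an exponentially decaying memory kernel $e^{-\gamma(t-s)}$ and writing $f$ for the objective function, one model of this form is

$$\dot{x}(t) \;=\; v_0\, e^{-\gamma t} \;-\; \int_0^{t} e^{-\gamma (t-s)}\, \nabla f\bigl(x(s)\bigr)\, ds ,$$

which, after differentiating in $t$, is equivalent to the second-order model

$$\ddot{x}(t) + \gamma\, \dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0, \qquad \dot{x}(0) = v_0 .$$

Under this assumed kernel, the method is a damped inertial dynamics whose initial velocity $v_0$ can carry the trajectory over barriers surrounding local minima, which a purely first-order gradient flow cannot cross.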
Next, in order to solve quadratic programming problems whose variables are constrained to the closed interval [0, 1], the gradient method with "memory" is realized by neural networks as operational circuits composed of neurons, each of which has two integral elements. The trajectory of the realized neural networks can overcome trapping in local minima, whereas the Hopfield-type network, modeled by a first-order differential equation, becomes trapped in them. (A simulation sketch of such a two-integrator neuron network is given below.)
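The following sketch is one plausible discrete-time reading of a two-integrator ("inertial") neuron network for a box-constrained quadratic program; it is not the paper's circuit, and the matrices, damping coefficient, sigmoid gain, step size, and initial velocities are illustrative assumptions.

```python
# A minimal sketch, assuming each "inertial" neuron consists of two integrators:
# an internal state u_i and a velocity v_i, with output x_i = sigmoid(u_i) in (0, 1).
# The target problem is the box-constrained quadratic program
#     min E(x) = 0.5 x^T Q x + c^T x   subject to   x in [0, 1]^n.
# Q, c, gamma, the sigmoid gain, the Euler step, and the initial velocities
# below are illustrative choices, not values from the paper.
import numpy as np

def sigmoid(u, gain=4.0):
    return 1.0 / (1.0 + np.exp(-gain * u))

def run_inertial_network(Q, c, u0, v0, gamma=0.5, dt=1e-3, steps=20000):
    """Forward-Euler simulation of the two-integrator (inertial) neuron network."""
    u, v = u0.astype(float).copy(), v0.astype(float).copy()
    for _ in range(steps):
        x = sigmoid(u)                  # neuron outputs, confined to (0, 1)
        grad = Q @ x + c                # gradient of E with respect to the outputs
        v += dt * (-gamma * v - grad)   # second integrator: damped velocity dynamics
        u += dt * v                     # first integrator: internal state
    return sigmoid(u)

if __name__ == "__main__":
    # An indefinite Q makes E nonconvex, so [0, 1]^2 contains several local minima.
    Q = np.array([[1.0, -3.0],
                  [-3.0, 1.0]])
    c = np.array([0.5, 0.5])
    u0 = np.zeros(2)
    x_rest = run_inertial_network(Q, c, u0, v0=np.zeros(2))            # zero initial velocity
    x_kick = run_inertial_network(Q, c, u0, v0=np.array([2.0, -2.0]))  # nonzero initial velocity
    print("output with zero initial velocity:   ", x_rest)
    print("output with nonzero initial velocity:", x_kick)
```

Comparing the two runs illustrates the role played by the initial velocity; with zero initial velocity and zero damping memory, the network reduces to first-order, Hopfield-like gradient descent on the same energy.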
Finally, numerical simulation results for simple test problems demonstrate the properties of the presented neural networks.

© システム制御情報学会