Nonlinear Theory and Its Applications, IEICE
Online ISSN : 2185-4106
ISSN-L : 2185-4106
Regular Section
Enhancing computational efficiency of gradient descent in complex-valued Hopfield neural network through GPU parallelization
Takanori Hashimoto, Teijiro Isokawa, Masaki Kobayashi, Naotake Kamiura
Journal Open Access

2025, Volume 16, Issue 1, pp. 197-207

Abstract

This study focuses on accelerating the learning process of complex-valued Hopfield neural network (CVHNN) models, an extension of the traditional Hopfield model in which neuron states and synaptic weights are complex numbers. A major challenge for CVHNN models is the slow convergence of the gradient descent learning rule (GDLR). To address this problem, we combined parallel computation on graphics processing units (GPUs) with complex-domain optimization techniques to improve both the speed and the stability of GDLR execution. This approach accelerated learning, enhanced memory capacity, and increased noise tolerance.
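To make the setting concrete, here is a minimal illustrative sketch (my own, not the authors' implementation) of a CVHNN with unit-modulus neuron states and a plain gradient descent on the recall error of stored patterns. All function names and hyperparameters below are assumptions for illustration. The training loop is fully vectorized over patterns, so replacing NumPy with a GPU array library such as CuPy would move the matrix products onto a GPU, which is the kind of parallelization the paper targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patterns(num_patterns, num_neurons):
    """Patterns with unit-modulus complex entries (random phases)."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(num_patterns, num_neurons))
    return np.exp(1j * phases)

def activation(z):
    """Project each complex value onto the unit circle."""
    return z / np.maximum(np.abs(z), 1e-12)

def train_gdlr(patterns, lr=0.05, steps=2000):
    """Gradient descent on 0.5 * sum_p ||p - W p||^2 with zero self-connections."""
    m, n = patterns.shape
    W = np.zeros((n, n), dtype=complex)
    for _ in range(steps):
        pred = patterns @ W.T              # row k is W @ p^(k), all patterns at once
        err = patterns - pred
        grad = -err.T @ patterns.conj()    # sum_k e^(k) (p^(k))^H
        W -= lr * grad / m
        np.fill_diagonal(W, 0.0)           # Hopfield convention: no self-coupling
    return W

def recall(W, x, iters=20):
    """Iterate the state update x <- activation(W x)."""
    for _ in range(iters):
        x = activation(W @ x)
    return x

if __name__ == "__main__":
    pats = random_patterns(3, 16)
    W = train_gdlr(pats)
    out = recall(W, pats[0])
    # overlap near 1 when the stored pattern is (approximately) a fixed point
    print(abs(np.vdot(pats[0], out)) / 16)
```

Because every update touches all patterns through dense complex matrix products, the per-step cost is dominated by GEMM-like operations that map well onto GPU hardware; that is the structural reason GPU parallelization pays off for this learning rule.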

© 2025 The Institute of Electronics, Information and Communication Engineers

This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.
https://creativecommons.org/licenses/by-nc-nd/4.0/