Nonlinear Theory and Its Applications, IEICE
Online ISSN : 2185-4106
ISSN-L : 2185-4106
Regular Section
Enhancing computational efficiency of gradient descent in complex-valued Hopfield neural network through GPU parallelization
Takanori Hashimoto, Teijiro Isokawa, Masaki Kobayashi, Naotake Kamiura

2025 Volume 16 Issue 1 Pages 197-207

Abstract

This study focuses on accelerating the learning process in complex-valued Hopfield neural network (CVHNN) models, an extension of the traditional Hopfield model in which neuron states and synaptic weights are complex numbers. A major challenge for CVHNN models is the slow convergence of the gradient descent learning rule (GDLR). To address this problem, we combined parallel computation on graphics processing units with complex-domain optimization techniques to improve the speed and stability of GDLR execution. This approach accelerated learning, enhanced memory capacity, and increased noise tolerance.
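To illustrate the kind of computation involved, the following is a minimal NumPy sketch of gradient-descent learning for a complex-valued Hopfield network. It is an assumption-laden stand-in, not the paper's actual method: it uses a simple least-squares objective (make each stored pattern a fixed point of the weight matrix) with a Wirtinger-calculus gradient, and runs on the CPU. Because the whole update is expressed as dense matrix products, the same code parallelizes naturally on a GPU (e.g., by replacing NumPy with a GPU array library such as CuPy). The sizes `N`, `P`, the learning rate, and the loss itself are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 16, 4  # hypothetical sizes: N neurons, P stored patterns

# Random unit-modulus complex patterns: each component lies on the unit circle,
# as neuron states do in a complex-valued Hopfield network.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(P, N))
X = np.exp(1j * phases)  # shape (P, N); row p is pattern x_p

W = np.zeros((N, N), dtype=np.complex128)
eta = 0.05  # learning rate (illustrative value)

for step in range(500):
    # Residuals: row p is (W x_p - x_p)^T, computed for all patterns at once.
    R = X @ W.T - X                      # shape (P, N)
    # Wirtinger gradient of L = 0.5 * sum_p ||W x_p - x_p||^2 w.r.t. conj(W):
    # grad = sum_p (W x_p - x_p) x_p^H, again as one matrix product.
    grad = R.T @ np.conj(X)              # shape (N, N)
    W -= eta * grad / P
    # (Actual CVHNN learning rules typically also enforce constraints such as
    # zero self-connections or Hermitian symmetry; omitted here for brevity.)

# After training, each stored pattern is close to a fixed point of W.
err = float(np.max(np.abs(X @ W.T - X)))
```

Every step is a batched matrix multiply over all patterns, which is exactly the workload GPUs execute efficiently; this is the sense in which GDLR for CVHNNs benefits from GPU parallelization.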

© 2025 The Institute of Electronics, Information and Communication Engineers

This article is licensed under a Creative Commons [Attribution-NonCommercial-NoDerivatives 4.0 International] license.
https://creativecommons.org/licenses/by-nc-nd/4.0/