2025 Volume 16 Issue 1 Pages 197-207
This study focuses on accelerating the learning process in complex-valued Hopfield neural network (CVHNN) models, an extension of the traditional Hopfield model in which neuron states and synaptic weights are complex numbers. A major challenge for CVHNN models is the slow convergence of the gradient descent learning rule (GDLR). To address this problem, we combined parallel computation on graphics processing units with complex-domain optimization techniques to improve both the speed and the stability of GDLR execution. This approach accelerated learning, enhanced memory capacity, and increased noise tolerance.
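To make the setting concrete, the following is a minimal NumPy sketch of a CVHNN with multistate phasor neurons and a gradient-descent learning rule. It is an illustration under assumed choices, not the paper's implementation: the squared-error loss, the phase resolution, and all function names (`phasor_patterns`, `gdlr_train`, `recall`) are assumptions for this example, and the paper's exact GDLR formulation and GPU kernels may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def phasor_patterns(num_patterns, num_neurons, resolution):
    """Random multistate phasor patterns: each entry is exp(2*pi*i*k/K)."""
    k = rng.integers(0, resolution, size=(num_patterns, num_neurons))
    return np.exp(2j * np.pi * k / resolution)

def gdlr_train(patterns, lr=0.05, epochs=500):
    """Gradient descent on E = sum_p ||x_p - W x_p||^2 (one hedged choice
    of loss). The Wirtinger gradient of E w.r.t. W is -(x - W x) x^H,
    accumulated over the stored patterns."""
    p, n = patterns.shape
    W = np.zeros((n, n), dtype=complex)
    for _ in range(epochs):
        grad = np.zeros_like(W)
        for x in patterns:
            e = x - W @ x                      # residual for this pattern
            grad -= np.outer(e, x.conj())      # complex-domain gradient term
        W -= lr * grad / p
        np.fill_diagonal(W, 0)                 # no self-connections
    return W

def recall(W, x, steps=20, resolution=8):
    """Synchronous recall: project each weighted sum back onto the
    nearest of the K allowed phasor states."""
    for _ in range(steps):
        h = W @ x
        k = np.round(np.angle(h) * resolution / (2 * np.pi)) % resolution
        x = np.exp(2j * np.pi * k / resolution)
    return x
```

For example, training on a few random patterns and then calling `recall` on a stored pattern should return that pattern as a fixed point; on a GPU, the per-pattern outer products would be batched into a single matrix multiplication to parallelize the update, which is the kind of restructuring the GPU acceleration described above relies on.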