Abstract
In recent years, GPGPU — the use of GPUs not only for graphics but also for general-purpose computation — has been widely researched, because GPUs now outperform CPUs for workloads such as 3DCG rendering and video processing. A GPU is a circuit dedicated to drawing graphics, and it is therefore characterized by a large number of simple arithmetic units. This characteristic is expected to be useful not only for graphics but also for massively parallel computation. In this research, we apply GPGPU to neural network learning, a form of intelligent signal processing. In previous work, we proposed three methods to accelerate neural network learning. One of them, the parallelization of pattern processing, left room for improvement. In this paper, we show that the weight coefficients of the neurons can be updated simultaneously by changing the order in which the patterns are computed. The proposed calculation method is evaluated on several test data sets, and we confirm that it converges similarly to the conventional method. We then propose an implementation optimized for the GPU. As a result, the proposed method is 3 to 6 times faster than the conventional method.
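As a rough illustration (not the paper's actual model or code), the idea of processing all training patterns at once so that the weight update becomes a single simultaneous operation can be sketched as a batched gradient step. The single-layer sigmoid network, the function names, and the learning-rate value below are all illustrative assumptions; on a GPU the per-pattern work would map to parallel threads, whereas here NumPy matrix operations stand in for that parallelism.

```python
import numpy as np

def batch_update(W, X, T, lr=0.5):
    """One batched weight update for a single-layer sigmoid network.

    All P patterns in X (shape P x N) are processed together: the
    per-pattern error terms are computed independently — the part that
    parallelizes across patterns — and then summed into one update of
    the weight matrix W (shape N x M). This is an illustrative sketch,
    not the method from the paper.
    """
    Y = 1.0 / (1.0 + np.exp(-X @ W))      # forward pass for all patterns at once
    grad = X.T @ ((Y - T) * Y * (1 - Y))  # gradients summed over patterns
    return W - lr * grad

# Tiny usage example: learn logical OR (third input column is a bias).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [1]], dtype=float)
W = np.zeros((3, 1))
for _ in range(2000):
    W = batch_update(W, X, T)
```

Because every pattern's gradient is computed from the same (pre-update) weights, the patterns can be evaluated in any order or fully in parallel, which is what makes this ordering GPU-friendly.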