2019 Volume 23 Issue 4 Pages 151-154
The most successful generative tasks, such as image completion, generally rely on generative adversarial networks (GANs). Hardware implementations of GANs must satisfy low-power and acceleration requirements, and GANs differ from usual neural networks in that they also require a training phase. We developed a hardware-oriented training algorithm based on a quantized stochastic gradient descent method. Based on this result, we devised a GAN architecture requiring 7 bits for inference and 26 bits for the training phase, when using a resized MNIST dataset with a three-layer perceptron for each network. The architecture can achieve real-time processing when it functions ideally; however, the bit width and processing speed depend on the network model.
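The quantized SGD scheme mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; the fixed-point format (bit width split into integer and fractional bits, round-to-nearest with saturation) and the helper names `quantize` and `quantized_sgd_step` are assumptions chosen for illustration only.

```python
import numpy as np

def quantize(x, bits, frac_bits):
    """Fixed-point quantization (assumed format): round to the nearest
    value representable with `bits` total bits, `frac_bits` of which are
    fractional, and saturate at the signed range limits."""
    scale = 2.0 ** frac_bits
    q = np.round(np.asarray(x, dtype=float) * scale) / scale
    limit = 2.0 ** (bits - 1 - frac_bits)        # signed integer range
    return np.clip(q, -limit, limit - 1.0 / scale)

def quantized_sgd_step(w, grad, lr, bits=26, frac_bits=20):
    """One SGD update with both the gradient and the updated weights
    quantized; 26 bits matches the training-phase width quoted in the
    abstract, while the frac_bits split is a hypothetical choice."""
    g = quantize(grad, bits, frac_bits)
    return quantize(w - lr * g, bits, frac_bits)
```

For example, with an 8-bit format having 4 fractional bits, `quantize(0.3, 8, 4)` rounds to 0.3125 (the nearest multiple of 1/16), and out-of-range inputs saturate at 7.9375 rather than wrapping around, which is the usual hardware-friendly behavior.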