We propose “QER”, a novel regularization strategy for hardware-aware neural network training. Although quantized neural networks reduce computational cost and resource consumption, they also degrade accuracy due to quantization errors in the numerical representation, defined as the differences between the original numbers and their quantized counterparts. QER addresses this problem by appending to the loss function a regularization term based on the quantization errors of the weights. This term forces the quantization errors of the weights to be reduced along with the original loss. We evaluate our method on MNIST with a simple neural network model. The results show that the proposed approach achieves higher accuracy than standard training with quantized forward propagation.
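A minimal PyTorch-style sketch of this idea, assuming uniform symmetric weight quantization, a squared-error penalty, and a hypothetical weighting coefficient `lam`; the abstract does not specify the bit width, the quantization scheme, or how the regularization term is weighted, so these are illustrative choices only:

```python
import torch
import torch.nn.functional as F

def quantize(w, num_bits=8):
    # Uniform symmetric quantization to a fixed-point grid
    # (an illustrative scheme; not specified in the abstract).
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def qer_loss(model, logits, targets, lam=1e-4):
    # Original task loss (cross-entropy, e.g. for MNIST classification).
    task_loss = F.cross_entropy(logits, targets)
    # Regularization term: sum of squared quantization errors of the weights,
    # i.e. differences between full-precision weights and their quantized values.
    # The quantized values are detached so the penalty pulls each weight
    # toward its nearest point on the quantization grid.
    reg = sum(((p - quantize(p).detach()) ** 2).sum()
              for p in model.parameters())
    return task_loss + lam * reg
```

In this sketch the total loss is the original loss plus `lam` times the summed quantization error, so minimizing it reduces both terms together, as the abstract describes; `lam` is a hypothetical hyperparameter balancing the two objectives.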