Nonlinear Theory and Its Applications, IEICE
Online ISSN : 2185-4106
ISSN-L : 2185-4106
Special Section on Stochastic and Quantum Computing
Quantization error-based regularization for hardware-aware neural network training
Kazutoshi Hirose, Ryota Uematsu, Kota Ando, Kodai Ueyoshi, Masayuki Ikebe, Tetsuya Asai, Masato Motomura, Shinya Takamaeda-Yamazaki

2018, Volume 9, Issue 4, pp. 453-465

Abstract

We propose “QER”, a novel regularization strategy for hardware-aware neural network training. Although quantized neural networks reduce computational power and resource consumption, they also degrade accuracy due to quantization errors in the numerical representation, which are defined as the differences between the original numbers and their quantized counterparts. QER addresses this problem by appending to the loss function an additional regularization term based on the quantization errors of the weights. The regularization term forces the quantization errors of the weights to be reduced along with the original loss. We evaluate our method on a simple neural network model using MNIST. The evaluation results show that the proposed approach achieves higher accuracy than the standard training approach with quantized forward propagation.
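The sketch below illustrates the general idea of a quantization-error-based regularization term added to the task loss, as described in the abstract. It is a minimal illustration, not the paper's implementation: the uniform fixed-point quantizer, the squared-error penalty form, the coefficient lam, and the 784-100-10 network are all assumptions chosen for the example.

```python
# Minimal sketch (assumed details, not taken from the paper): penalize the
# squared difference between full-precision weights and their quantized values.
import torch
import torch.nn as nn


def quantize(w: torch.Tensor, bits: int = 8, scale: float = 1.0) -> torch.Tensor:
    """Uniform fixed-point quantizer (assumed form): snap weights to a grid."""
    step = scale / (2 ** (bits - 1))
    return torch.clamp(torch.round(w / step) * step, -scale, scale - step)


def quantization_error_penalty(model: nn.Module, bits: int = 8) -> torch.Tensor:
    """Sum of squared quantization errors over all model weights."""
    penalty = torch.tensor(0.0)
    for p in model.parameters():
        # detach() treats the quantized value as a constant target,
        # so the gradient simply pulls each weight toward its grid point.
        penalty = penalty + torch.sum((p - quantize(p, bits).detach()) ** 2)
    return penalty


# One training step: original task loss plus the weighted quantization-error term.
model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
lam = 1e-3  # regularization coefficient (hypothetical value)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = criterion(model(x), y) + lam * quantization_error_penalty(model)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch the extra term drives the weights toward values that survive quantization with little error, while the task loss is minimized as usual; the paper's actual quantizer, penalty, and hyperparameters may differ.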

© 2018 The Institute of Electronics, Information and Communication Engineers