IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508

Accelerating CNN Inference with an Adaptive Quantization Method Using Computational Complexity-Aware Regularization
Kengo NAKATA, Daisuke MIYASHITA, Jun DEGUCHI, Ryuichi FUJIMOTO
Journal, free access. Advance online publication.

Article ID: 2023EAP1163

Abstract

Quantization is commonly used to reduce the inference time of convolutional neural networks (CNNs). To reduce the inference time without drastically reducing accuracy, optimal bit widths need to be allocated for each layer or filter of the CNN. In conventional methods, the optimal bit allocation is obtained by using the gradient descent algorithm while minimizing the model size. However, the model size has little to no correlation with the inference time. In this paper, we present a computational-complexity metric called MAC×bit that is strongly correlated with the inference time of quantized CNNs. We propose a gradient descent-based regularization method that uses this metric for optimal bit allocation of a quantized CNN to improve the recognition accuracy and reduce the inference time. In experiments, the proposed method reduced the inference time of a quantized ResNet-18 model by 21.0% compared with the conventional regularization method based on model size while maintaining comparable recognition accuracy.
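The MAC×bit metric described in the abstract weights each layer's multiply-accumulate (MAC) count by its assigned bit width, so that the regularization term tracks actual compute cost rather than model size. A minimal sketch of how such a cost could be computed is below; the layer shapes and helper names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a MAC x bit computational-complexity cost (illustrative only;
# function names and example shapes are assumptions, not the paper's code).

def conv_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """MACs for one conv layer: output positions x kernel volume x channels."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

def mac_bit_cost(layers):
    """Sum over layers of MACs x bit width -- the quantity to regularize.

    `layers` is a list of (mac_count, bit_width) pairs, one per layer.
    """
    return sum(macs * bits for macs, bits in layers)

# Example: two conv layers with different bit allocations. Allocating fewer
# bits to the heavier layer lowers the cost more than shrinking a light one.
layers = [
    (conv_macs(56, 56, 64, 64, 3, 3), 8),   # 8-bit layer
    (conv_macs(28, 28, 128, 64, 3, 3), 4),  # 4-bit layer
]
cost = mac_bit_cost(layers)
```

In a gradient-descent setting, the bit widths would be relaxed to continuous variables so a cost of this form can be differentiated and added to the training loss as a regularizer.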

© 2024 The Institute of Electronics, Information and Communication Engineers