IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Regular Section
POEM: Pruning with Output Error Minimization for Compressing Deep Neural Networks
Koji KAMMA, Toshikazu WADA

2025 Volume E108.D Issue 7 Pages 734-743

Abstract

Deep Neural Networks (DNNs) are dominant in the field of Computer Vision (CV). Although DNN models show state-of-the-art performance in various CV tasks, using such models on resource-limited equipment (mobile phones, in-vehicle cameras, and so on) is challenging. Therefore, techniques for compressing DNN models without significant accuracy loss are desired. Pruning is one such technique that removes redundant neurons (or channels). In this paper, we present Pruning with Output Error Minimization (POEM). POEM has two steps: pruning and reconstruction. In the pruning step, the importance of neurons is evaluated, and the unimportant neurons are selected and removed. In the reconstruction step, the weights of the remaining neurons are tuned to compensate for the error caused by pruning so that the model accuracy can be well preserved. The advantage of POEM over the previous methods is that both neuron selection and reconstruction are performed based on the output error of the activation functions. In contrast, the previous methods minimize the error before the activation functions. Experiments were conducted on well-known DNN models (VGG, ResNet, and MobileNet) and image recognition datasets (ImageNet, CUB-200-2011, and CIFAR-10). The results show that POEM significantly outperforms the previous methods in maintaining the accuracy of the compressed models.
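The two-step procedure the abstract describes (select unimportant neurons, then tune the survivors to match the post-activation output) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the name poem_prune, the magnitude-based importance proxy, and the gradient-based reconstruction are illustrative assumptions; only the pruning/reconstruction split and the post-activation objective are taken from the abstract.

import torch

def poem_prune(weight, x, keep, steps=200, lr=1e-2):
    """Prune input neurons of a ReLU layer, then reconstruct the surviving weights.

    weight: (out_features, in_features) layer weight
    x:      (n_samples, in_features) sampled activations feeding the layer
    keep:   number of input neurons to retain
    """
    relu = torch.relu
    y_ref = relu(x @ weight.t())  # original post-activation output to preserve

    # Step 1 (pruning): rank input neurons with a simple magnitude-based
    # importance proxy (illustrative; POEM derives selection from output error).
    importance = x.abs().mean(dim=0) * weight.abs().sum(dim=0)
    kept = importance.topk(keep).indices.sort().values

    # Step 2 (reconstruction): tune the surviving weights so that the
    # *post-activation* output matches the original, as the abstract advocates.
    w = weight[:, kept].clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    xs = x[:, kept]
    for _ in range(steps):
        opt.zero_grad()
        loss = ((relu(xs @ w.t()) - y_ref) ** 2).mean()
        loss.backward()
        opt.step()
    return kept, w.detach()

# Example: prune a 64-neuron input down to 48 on random data.
torch.manual_seed(0)
x = torch.randn(512, 64)
w = torch.randn(32, 64)
kept_idx, w_pruned = poem_prune(w, x, keep=48)

Note the design point the sketch highlights: because the error is measured after the ReLU, the reconstruction objective is non-linear, so gradient descent is used here in place of the closed-form least-squares solve that applies when minimizing the pre-activation error, as earlier methods do.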

© 2025 The Institute of Electronics, Information and Communication Engineers