Exploiting spatial redundancy in feature maps to accelerate convolutional neural networks

2024-08-06
Ulaş, Muhammed Yasin
The field of computer vision has changed dramatically with the advent of convolutional neural networks (CNNs). Since then, CNNs have outperformed previous methods in various tasks, such as image classification, object detection, and instance segmentation. However, they are computationally intensive, which hinders their deployment on devices with limited hardware. Moreover, their energy consumption and carbon footprint have become an important concern. As a result, researchers have proposed many methods for accelerating CNNs. In this study, we propose a new type of convolution operation called Redundancy-Aware Convolution (RAConv), which accelerates convolutional layers of CNNs by skipping the processing of feature-map patches that are considered redundant. To evaluate the proposed method, we first train the VGG-11 model on the Imagenette dataset as the baseline. Then, we replace one or more convolutional layers of VGG-11 with RAConv layers, train the model with the same hyperparameters, and compare inference performance on the CPU. The experimental results show that an individual layer achieves a speedup of 2.7x without a drop in accuracy, and multiple layers achieve an overall speedup of 1.2x with a 0.9% drop in accuracy.
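The core idea of skipping redundant patches can be illustrated with a minimal sketch. The following NumPy example is a hypothetical single-channel illustration, not the thesis's actual implementation: it uses patch variance as an assumed redundancy criterion, and approximates the output for low-variance patches with the patch mean times the kernel sum instead of a full dot product. The redundancy test and approximation used in RAConv may differ.

```python
import numpy as np

def ra_conv2d(x, w, thresh=1e-3):
    """Sketch of a redundancy-aware convolution (hypothetical).

    x: 2-D input feature map (H, W); w: 2-D kernel (k, k).
    A patch whose variance falls below `thresh` is treated as
    redundant and approximated cheaply as mean(patch) * sum(w),
    which is exact when the patch is constant.
    Returns the output map and the number of skipped patches.
    """
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    w_sum = w.sum()
    skipped = 0
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k]
            if patch.var() < thresh:
                # Redundant patch: cheap approximation, no dot product.
                out[i, j] = patch.mean() * w_sum
                skipped += 1
            else:
                # Non-redundant patch: full convolution.
                out[i, j] = (patch * w).sum()
    return out, skipped
```

On feature maps with large near-constant regions, most patches take the cheap branch, which is where the speedup comes from; in practice the saving depends on a vectorized implementation rather than Python loops.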
Citation Formats
M. Y. Ulaş, “Exploiting spatial redundancy in feature maps to accelerate convolutional neural networks,” M.S. - Master of Science, Middle East Technical University, 2024.