Exploiting spatial redundancy in feature maps to accelerate convolutional neural networks
Date
2024-08-06
Author
Ulaş, Muhammed Yasin
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The field of computer vision has changed dramatically with the advent of convolutional neural networks (CNNs). Since then, they have outperformed previous methods in various tasks, such as image classification, object detection, and instance segmentation. However, they are computationally intensive, which hinders their deployment on devices with limited hardware. Moreover, their energy consumption and carbon footprint have become important concerns. As a result, researchers have proposed many methods for accelerating CNNs. In this study, we propose a new type of convolution operation called Redundancy-Aware Convolution (RAConv), which accelerates the convolutional layers of CNNs by skipping patches in the feature map that are considered redundant. To test the proposed method, we first train the VGG-11 model on the Imagenette dataset as the baseline. Then, we replace one or more convolutional layers of VGG-11 with RAConv layers, train the model with the same hyperparameters, and compare inference performance on the CPU. The experimental results show that an individual layer achieves a speedup of 2.7x without a drop in accuracy, and multiple layers achieve an overall speedup of 1.2x with a drop of 0.9% in accuracy.
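The abstract does not describe how RAConv is implemented, so the following is only an illustrative PyTorch sketch of the general idea it outlines: score each spatial position by how much it deviates from its local average, evaluate the exact convolution only where that score exceeds a threshold, and fall back to a cheap low-resolution result elsewhere. The class name, the redundancy criterion, and the threshold value are hypothetical and are not taken from the thesis.

```python
# Illustrative sketch only -- NOT the thesis implementation of RAConv.
# All names, the redundancy criterion, and the threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RedundancyAwareConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, threshold=0.05):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        self.threshold = threshold  # hypothetical redundancy cutoff

    def forward(self, x):
        # Cheap dense output: convolve a 2x-downsampled input and upsample.
        coarse = F.interpolate(
            self.conv(F.avg_pool2d(x, 2)),
            size=x.shape[-2:], mode="nearest")

        # Redundancy score per spatial position: regions that are well
        # explained by their own local average are treated as redundant.
        local_mean = F.avg_pool2d(x, 3, stride=1, padding=1)
        score = (x - local_mean).abs().mean(dim=1, keepdim=True)
        keep = score > self.threshold  # positions to compute exactly

        # Dense exact output; a real implementation would gather only the
        # kept patches and convolve those to obtain the speedup.
        exact = self.conv(x)

        # Exact values where the input is informative, cheap values elsewhere.
        return torch.where(keep, exact, coarse)


if __name__ == "__main__":
    layer = RedundancyAwareConv2d(64, 128)
    y = layer(torch.randn(1, 64, 56, 56))
    print(y.shape)  # torch.Size([1, 128, 56, 56])
```

Note that this dense-masking sketch does not by itself yield a speedup; the reported 2.7x gain would require actually gathering the non-redundant patches and convolving only those, which the sketch omits for brevity.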
Subject Keywords
Deep learning, Convolutional neural networks, Spatial redundancy
URI
https://hdl.handle.net/11511/110919
Collections
Graduate School of Natural and Applied Sciences, Thesis
Citation Formats
IEEE
M. Y. Ulaş, “Exploiting spatial redundancy in feature maps to accelerate convolutional neural networks,” M.S. - Master of Science, Middle East Technical University, 2024.