Improved Knowledge Distillation with Dynamic Network Pruning
Date: 2022-09-30
Authors: Şener, Eren; Akbaş, Emre
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 140 views, 66 downloads
Abstract
Deploying convolutional neural networks to mobile or embedded devices is often prohibited by limited memory and computational resources. This is particularly problematic for the most successful networks, which tend to be very large and require long inference times. Many alternative approaches have been developed for compressing neural networks based on pruning, regularization, quantization or distillation. In this paper, we propose the “Knowledge Distillation with Dynamic Pruning” (KDDP), which trains a dynamically pruned compact student network under the guidance of a large teacher network. In KDDP, we train the student network with supervision from the teacher network, while applying L1 regularization on the neuron activations in a fully-connected layer. Subsequently, we prune inactive neurons. Our method automatically determines the final size of the student model. We evaluate the compression rate and accuracy of the resulting networks on an image classification dataset, and compare them to results obtained by Knowledge Distillation (KD). Compared to KD, our method produces better accuracy and more compact models.
Subject Keywords: Knowledge Distillation, Neural Network, Compression, Image Classification, Deep Neural Networks
URI: https://hdl.handle.net/11511/102747
Journal: Gazi Üniversitesi Fen Bilimleri Dergisi Part C: Tasarım ve Teknoloji
DOI: https://doi.org/10.29109/gujsc.1141648
Collections: Department of Computer Engineering, Article
Suggestions (OpenMETU Core)
Improved knowledge distillation with dynamic network pruning
Şener, Eren; Akbaş, Emre; Department of Computer Engineering (2019)
Deploying convolutional neural networks to mobile or embedded devices is often prohibited by limited memory and computational resources. This is particularly problematic for the most successful networks, which tend to be very large and require long inference times. In the past, many alternative approaches have been developed for compressing neural networks based on pruning, regularization, quantization or distillation. In this thesis, we propose the Knowledge Distillation with Dynamic Pruning (KDDP), which ...
Effect of quantization on the performance of deep networks
Kütükcü, Başar; Bozdağı Akar, Gözde; Department of Electrical and Electronics Engineering (2020)
Deep neural networks have performed remarkably well on many engineering problems in recent years. However, the power- and memory-hungry nature of deep learning algorithms prevents mobile devices from benefiting from the success of deep neural networks. The increasing number of mobile devices creates a push to make deep network deployment possible for resource-constrained devices. Quantization is a solution to this problem. In this thesis, different quantization techniques and their effects on deep networks are examined. The tech...
Improving classification performance of endoscopic images with generative data augmentation
Çağlar, Ümit Mert; Temizel, Alptekin; Department of Modeling and Simulation (2022-2-8)
The performance of a supervised deep learning model is highly dependent on the quality and variety of the images in the training dataset. In some applications, it may be impossible to obtain more images. Data augmentation methods have been proven to be successful in increasing the performance of deep learning models with limited data. Recent improvements in Generative Adversarial Network (GAN) algorithms and structures have resulted in improved image quality and diversity and made GAN training possible with lim...
A new approach to mathematical water quality modeling in reservoirs: Neural networks
Karul, C; Soyupak, S; Germen, E (1998-01-01)
Neural Networks are becoming more and more valuable tools for system modeling and function approximation as the computing power of microcomputers increases. Modeling of complex ecological systems such as reservoir limnology is very difficult, since the ecological interactions within a reservoir are hard to define mathematically and are usually system specific. To illustrate the potential use of Neural Networks in ecological modeling, software was developed to train the data from Keban Dam Reservoir by back...
Neural networks with piecewise constant argument and impact activation
Yılmaz, Enes; Akhmet, Marat; Department of Scientific Computing (2011)
This dissertation addresses new models in mathematical neuroscience: artificial neural networks, which share many similarities with the structure of the human brain and model the functions of cells with electronic circuits. These networks have been investigated because of their extensive applications in pattern classification, associative memories, image processing, artificial intelligence, signal processing and optimization problems. These applications depend crucially on the dynamical behaviors of the networks. In t...
Citation Formats
IEEE
E. Şener and E. Akbaş, “Improved Knowledge Distillation with Dynamic Network Pruning,” Gazi Üniversitesi Fen Bilimleri Dergisi Part C: Tasarım ve Teknoloji, vol. 10, no. 3, pp. 650–665, 2022, Accessed: 00, 2023. [Online]. Available: https://hdl.handle.net/11511/102747.