Effect of quantization on the performance of deep networks
Date
2020
Author
Kütükcü, Başar
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats
317 views, 170 downloads
Deep neural networks have performed remarkably well on many engineering problems in recent years. However, the power- and memory-hungry nature of deep learning algorithms prevents mobile devices from benefiting from this success. The growing number of mobile devices creates a push to make deep network deployment possible on resource-constrained devices. Quantization is one solution to this problem. In this thesis, different quantization techniques and their effects on deep networks are examined. The techniques are benchmarked by their accuracy and memory requirements. The effects of quantization are examined for different network architectures, including shallow, overparameterized, deep, residual, and efficient models. Architecture-specific problems are observed, and corresponding solutions are proposed. Quantized models are compared with models designed from the ground up for efficiency, and the advantages and disadvantages of each technique are examined. Standard and quantized convolution operations were implemented on real systems ranging from low-power embedded systems to powerful desktop computers, and their computation time and memory requirements were examined on these systems.
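Among the techniques the abstract refers to, the simplest to picture is per-tensor uniform quantization of weights. The sketch below is not code from the thesis; it is a minimal NumPy illustration (all names hypothetical) of 8-bit symmetric quantization, showing the 4x storage reduction relative to float32 and the rounding error the mapping introduces.

```python
import numpy as np

def quantize_uniform_int8(w):
    """Per-tensor symmetric uniform quantization to 8 bits.

    Maps float weights to integers in [-127, 127] with a single
    scale factor, so that w is approximately q * scale.
    """
    qmax = 127                                    # 2**(8-1) - 1
    scale = np.abs(w).max() / qmax                # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integers and a scale."""
    return q.astype(np.float32) * scale

# Toy example: quantize a randomly initialized 3x3 conv kernel.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q, scale = quantize_uniform_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"storage: {q.nbytes} B (int8) vs {w.nbytes} B (float32), "
      f"mean abs rounding error: {err:.5f}")
```

Real deployments refine this in ways the thesis benchmarks, such as per-channel scales or quantization-aware training, but the accuracy/memory trade-off is already visible in this sketch.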
Subject Keywords
Neural networks, Deep Neural Networks, Quantization
URI
http://etd.lib.metu.edu.tr/upload/12625535/index.pdf
https://hdl.handle.net/11511/45772
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
Improved Knowledge Distillation with Dynamic Network Pruning
Şener, Eren; Akbaş, Emre (2022-9-30)
Deploying convolutional neural networks to mobile or embedded devices is often prohibited by limited memory and computational resources. This is particularly problematic for the most successful networks, which tend to be very large and require long inference times. Many alternative approaches have been developed for compressing neural networks based on pruning, regularization, quantization or distillation. In this paper, we propose the “Knowledge Distillation with Dynamic Pruning” (KDDP), which trains a dyn...
Effect of Annotation Errors on Drone Detection with YOLOv3
Köksal, Aybora; Alatan, Abdullah Aydın (2020-07-28)
Following the recent advances in deep networks, object detection and tracking algorithms with deep learning backbones have been improved significantly; however, this rapid development resulted in the necessity of large amounts of annotated labels. Even if the details of such semi-automatic annotation processes for most of these datasets are not known precisely, especially for the video annotations, some automated labeling processes are usually employed. Unfortunately, such approaches might result with erron...
A digital neuron realization for the random neural network model
CERKEZ, CUNEYT; AYBAY, HADİ IŞIK; Halıcı, Uğur (1997-06-12)
In this study the neuron of the random neural network (RNN) model (Gelenbe 1989) is designed using digital circuitry. In the RNN model, each neuron accumulates arriving pulses and can fire if its potential at a given instant of time is strictly positive. Firing occurs at random, the intervals between successive firing instants following an exponential distribution of constant rate. When a neuron fires, it routes the generated pulses to the appropriate output lines in accordance with the connection probabili...
Prediction of Nonlinear Drift Demands for Buildings with Recurrent Neural Networks
Kocamaz, Korhan; Binici, Barış; Tuncay, Kağan (2021-09-08)
Application of deep learning algorithms to the problems of structural engineering is an emerging research field. In this study, a deep learning algorithm, namely recurrent neural network (RNN), is applied to tackle a problem related to the assessment of reinforced concrete buildings. Inter-storey drift ratio profile of a structure is a quite important parameter while conducting assessment procedures. In general, procedures require a series of time-consuming nonlinear dynamic analysis. In this study, an extensiv...
An experimental comparison of symbolic and neural learning algorithms
Baykal, Nazife (1998-04-23)
In this paper comparative strengths and weaknesses of symbolic and neural learning algorithms are analysed. Experiments comparing the new generation symbolic algorithms and neural network algorithms have been performed using twelve large, real-world data sets.
Citation Formats
IEEE
B. Kütükcü, “Effect of quantization on the performance of deep networks,” Thesis (M.S.) -- Graduate School of Natural and Applied Sciences. Electrical and Electronics Engineering., Middle East Technical University, 2020.