Effect of quantization on the performance of deep networks

2020
Kütükcü, Başar
Deep neural networks have performed remarkably well on many engineering problems in recent years. However, the power- and memory-hungry nature of deep learning algorithms prevents mobile devices from benefiting from this success. The growing number of mobile devices creates a push to make deep network deployment possible on resource-constrained devices. Quantization is one solution to this problem. In this thesis, different quantization techniques and their effects on deep networks are examined. The techniques are benchmarked by their accuracy and memory requirements. The effects of quantization are examined for different network architectures, including shallow, overparameterized, deep, residual, and efficient models. Architecture-specific problems are observed and related solutions are proposed. Quantized models are compared with models designed from the ground up for efficiency, and the advantages and disadvantages of each technique are examined. Standard and quantized convolution operations are implemented on real systems ranging from low-power embedded systems to powerful desktop computers, and their computation time and memory requirements are examined on these systems.
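
To make the idea concrete, the sketch below (not part of the thesis; a minimal NumPy example assuming symmetric, per-tensor 8-bit post-training quantization) shows how floating-point weights can be mapped to int8 codes and back, and how much memory the integer representation saves.

import numpy as np

def quantize_uniform(w, num_bits=8):
    # Symmetric uniform quantization: one scale per tensor, signed integer codes.
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Map integer codes back to approximate floating-point weights.
    return q.astype(np.float32) * scale

# Quantize a random "layer" and inspect the error and memory footprint.
w = np.random.randn(256, 128).astype(np.float32)
q, scale = quantize_uniform(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
print("memory (bytes): float32 =", w.nbytes, ", int8 =", q.nbytes)

The same scale factor is reused at inference time, so storage drops roughly fourfold while the dequantized weights stay close to the originals; the thesis evaluates far more elaborate schemes than this per-tensor example.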

Suggestions

Improved Knowledge Distillation with Dynamic Network Pruning
Şener, Eren; Akbaş, Emre (2022-09-30)
Deploying convolutional neural networks to mobile or embedded devices is often prohibited by limited memory and computational resources. This is particularly problematic for the most successful networks, which tend to be very large and require long inference times. Many alternative approaches have been developed for compressing neural networks based on pruning, regularization, quantization or distillation. In this paper, we propose the “Knowledge Distillation with Dynamic Pruning” (KDDP), which trains a dyn...
Effect of Annotation Errors on Drone Detection with YOLOv3
Köksal, Aybora; Alatan, Abdullah Aydın (2020-07-28)
Following the recent advances in deep networks, object detection and tracking algorithms with deep learning backbones have been improved significantly; however, this rapid development has created a need for large amounts of annotated labels. Even if the details of such semi-automatic annotation processes for most of these datasets are not known precisely, especially for the video annotations, some automated labeling processes are usually employed. Unfortunately, such approaches might result with erron...
A digital neuron realization for the random neural network model
Cerkez, Cuneyt; Aybay, Hadi Işık; Halıcı, Uğur (1997-06-12)
In this study the neuron of the random neural network (RNN) model (Gelenbe 1989) is designed using digital circuitry. In the RNN model, each neuron accumulates arriving pulses and can fire if its potential at a given instant of time is strictly positive. Firing occurs at random, the intervals between successive firing instants following an exponential distribution of constant rate. When a neuron fires, it routes the generated pulses to the appropriate output lines in accordance with the connection probabili...
Prediction of Nonlinear Drift Demands for Buildings with Recurrent Neural Networks
Kocamaz, Korhan; Binici, Barış; Tuncay, Kağan (2021-09-08)
Application of deep learning algorithms to the problems of structural engineering is an emerging research field. In this study, a deep learning algorithm, namely recurrent neural network (RNN), is applied to tackle a problem related to the assessment of reinforced concrete buildings. Inter-storey drift ratio profile of a structure is a quite important parameter while conducting assessment procedures. In general, procedures require a series of time-consuming nonlinear dynamic analysis. In this study, an extensiv...
An experimental comparison of symbolic and neural learning algorithms
Baykal, Nazife (1998-04-23)
In this paper, the comparative strengths and weaknesses of symbolic and neural learning algorithms are analysed. Experiments comparing the new-generation symbolic algorithms and neural network algorithms have been performed using twelve large, real-world data sets.
Citation Formats
B. Kütükcü, “Effect of quantization on the performance of deep networks,” M.S. Thesis, Graduate School of Natural and Applied Sciences, Electrical and Electronics Engineering, Middle East Technical University, 2020.